I'm adding library support for common term-expansion workflows (1). Currently, I have defined a "set" workflow, where sets of term-expansion rules (2) are tried until one of them succeeds, and a "pipeline" workflow, where the expansion results from one set of term-expansion rules are passed to the next set in the pipeline. I wonder if there are other sensible term-expansion workflows that, even if less common, have practical uses and are thus still worthy of library support.
(1) For Logtalk, the current versions can be found at:
https://github.com/LogtalkDotOrg/logtalk3/blob/master/library/hook_pipeline.lgt
https://github.com/LogtalkDotOrg/logtalk3/blob/master/library/hook_set.lgt
(2) A set of expansion rules is to be understood in this context as a set of clauses for the term_expansion/2 user-defined hook predicate (also possibly the goal_expansion/2 user-defined hook predicate, although this is less likely given the fixed point semantics used for goal-expansion) defined in a Prolog module or a Logtalk object (other than the user pseudo-module/object).
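For readers who don't use Logtalk, a minimal plain-Prolog sketch of the two workflows might look like this (set_expand/3 and pipeline_expand/3 are illustrative names, not the library's API; each hook is a closure called with two extra arguments):

% "Set": try each hook in turn until one succeeds.
set_expand([Hook|Hooks], Term, Expansion) :-
    (   call(Hook, Term, Expansion)
    ->  true
    ;   set_expand(Hooks, Term, Expansion)
    ).

% "Pipeline": feed each hook's result to the next; a hook that
% fails (one possible design choice) passes the term through unchanged.
pipeline_expand([], Term, Term).
pipeline_expand([Hook|Hooks], Term0, Term) :-
    (   call(Hook, Term0, Term1)
    ->  true
    ;   Term1 = Term0
    ),
    pipeline_expand(Hooks, Term1, Term).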
The fixpoint is already both a set and a pipeline, on a certain level, during expansion. Its expand_term/2 is just the transitive closure of the one-step term_expansion/2 clauses. But this works only while descending into a term; in my opinion, we also need something when assembling the term again.
In rare cases this transitive closure even needs the (==)/2 check found in some Prolog systems. Most likely it can simply stop if none of the term_expansion/2 clauses do anything. So we have, basically, without the (==)/2 check:
expand_term(X, Y) :- term_expansion(X, H), !, expand_term(H, Y).
expand_term(X, X).   /* base case; possibly descend into meta-predicate arguments here */
But what I would like to see is a kind of simplification framework added to the expansion framework. So when we descend into a meta-predicate and come back, we should call a simplification hook.
This is in accordance with some theories of term rewriting, which say: the normal form (nf) of a compound is a function of the normal forms of its parts. The expansion framework would thus not deal with normal forms, only deliver redefinitions of predicates; the simplification framework would do the normal-form work:
nf(f(t_1, .., t_n)) --> f'(nf(t_1), .., nf(t_n))
So the simplification hook would receive f(nf(t_1), .., nf(t_n)): assuming that expand_term, when descending into a meta-predicate, already yields nf(t_1), .., nf(t_n) for the meta-arguments, we can simply hand f(nf(t_1), .., nf(t_n)) to the simplifier.
The simplifier would then return f'(nf(t_1), .., nf(t_n)), i.e. do its work and return a simplified form, based on the assumption that the arguments are already simplified. Such a simplifier can be quite powerful. Jekejeke Prolog delivers such a stage after expansion.
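A minimal sketch of this scheme (illustrative only, not Jekejeke's actual API; simplify/2 stands for the simplification hook and is assumed to have a catch-all last clause simplify(T, T)):

nf(Var, Var) :-
    var(Var), !.
nf(Term0, Term) :-
    Term0 =.. [F|Args0],
    maplist(nf, Args0, Args),   % normal forms of the parts first
    Term1 =.. [F|Args],
    simplify(Term1, Term).      % one top-level step on already-normal arguments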
The Jekejeke Prolog simplifier and its integration into the expansion framework are open source, here and here. It is for example used to reorder conjunctions; here are the example rules for this purpose:
/* Predefined Simplifications */
sys_goal_simplification(((A, B), C), J) :-
    sys_simplify_goal((B, C), H),
    sys_simplify_goal((A, H), J).
Example:
(((a, b), c), d) --> (a, (b, (c, d)))
The Jekejeke Prolog simplifier is extremely efficient, since it can work under the assumption that it receives already normalized terms. It will not unnecessarily repeat pattern matching over the whole given term.
But writing simplification rules takes some of the common practice of rewriting systems: a simplification rule should call simplification whenever it constructs a new term.
In the example above these are the two sys_simplify_goal/2 calls. We do not, for example, simply return a new term with (B, C) in it, as an expansion rule would do: since (B, C) was not part of the normalized arguments of sys_goal_simplification/2, we have to normalize it first.
But since the simplifier framework is intertwined with the expansion framework, I doubt that it can be called a workflow architecture. There is no specific flow direction; the result is rather a ping-pong. Nevertheless, the simplification framework can be used in a modular way.
The Jekejeke Prolog simplifier is also used in the forward-chaining clause rewriting, where it generates multiple delta-computation clauses from a single forward clause.
Bye
Main features
I have recently been looking into making a Prolog meta-interpreter with a certain set of features, but I am starting to see that I don't have the theoretical knowledge to work on it.
The features are as follows:
1. Depth-first search.
2. Interprets any non-recursive Prolog program the same way a classic interpreter would.
3. Guarantees breaking out of any infinite recursion. This most likely means breaking Turing-completeness, and I'm okay with that.
4. As long as each step of the recursion reduces the complexity of the expression, keep evaluating it. To be more specific, I want predicates to be allowed to call themselves, but I want to prevent a clause from being able to call a similarly or more complex version of itself. (One naive reading of this feature is sketched right after this list.)
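For concreteness, here is feature 4 as a sketch (entirely illustrative: the term-size measure and the bound threading are assumptions of mine, not an established technique; it only handles true/0, conjunctions, and user-defined predicates reachable via clause/2, which stricter systems require to be declared dynamic):

% Evaluate a goal, allowing each nested call to be only strictly
% smaller (by term size) than the goal that spawned it.
eval(Goal) :-
    term_size(Goal, Size),
    Bound is Size + 1,
    eval(Goal, Bound).

eval(true, _) :- !.
eval((A, B), Bound) :- !,
    eval(A, Bound),
    eval(B, Bound).
eval(Goal, Bound) :-
    term_size(Goal, Size),
    Size < Bound,              % the complexity must strictly decrease
    clause(Goal, Body),
    eval(Body, Size).

% term_size/2: number of variable and functor occurrences in a term.
term_size(Var, 1) :- var(Var), !.
term_size(Term, Size) :-
    Term =.. [_|Args],
    args_size(Args, 1, Size).

args_size([], Size, Size).
args_size([Arg|Args], Size0, Size) :-
    term_size(Arg, S),
    Size1 is Size0 + S,
    args_size(Args, Size1, Size).

Since the bound strictly decreases at every clause expansion and sizes are positive, the derivation depth is bounded by the initial goal size, so this evaluator always terminates, at the obvious price of completeness.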
Obviously, (3) and (4) are the ones I am having problems with. I am not sure whether those two features are compatible. I am not even sure whether there is a way to define complexity such that (4) makes logical sense.
In my research, I have come across the concept of an "unavoidable pattern", which, I believe, provides a way to ensure feature (3), as long as feature (4) has a well-formed definition.
I specifically want to know if this kind of interpreter has been proven impossible, and, if not, if theoretical or concrete work on similar interpreters has been done in the past.
Extra features
Provided the above features are possible to implement, there are extra features I want to add, and I would be grateful if you could enlighten me on their feasibility as well:
5. Systematically characterize and describe those recursions, such that, when one is detected, a user-defined predicate or clause could be called that matches this specific form of recursion.
6. Detect patterns that result in an exponential number of combinatorial choices, prevent evaluation, and characterize them in the same way as step (5), such that they can be handled by a built-in or user-defined predicate.
Example
Here is a simple predicate that obviously results in infinite recursion in a normal Prolog interpreter in all but the simplest of cases. This interpreter should be able to evaluate it in at most PSPACE (and, I believe, at most P if (6) is possible to implement), while still giving relevant results.
eq(E, E).
eq(E, F) :- eq(E, F0), eq(F0, F).
eq(A + B, AR + BR) :- eq(A, AR), eq(B, BR).
eq(A + B, B + A).
eq(A * B, B * A).
eq((A * B) / B, A).
eq(E, R) :- eq(R, E).
Examples of expected results:
?- eq((a + c) + b, b + (c + a)).
true.
?- eq(a, (a * b) / C).
C = b.
The fact that this kind of interpreter might prove useful on the provided example hints that such an interpreter is probably impossible; but if it is, I would like to understand what makes it impossible (for example, does (3) + (4) reduce to the halting problem? Is (6) NP?).
If you want to guarantee termination you can conservatively assume any input goal is nonterminating until proven otherwise, using a decidable proof procedure. Basically, define some small class of goals which you know halt, and expand it over time with clever ideas.
Here are three examples, which guarantee or force three different kinds of termination respectively (also see the Power of Prolog chapter on termination):
existential-existential: at least one answer is reached before potentially diverging
universal-existential: no branches diverge but there may be an infinite number of them, so the goal may not be universally terminating
universal-universal: after a finite number of steps, every answer will be reached, so in particular there must be a finite number of answers
In the following, halts(Goal) is assumed to correctly test a goal for existential-existential termination.
Existential-Existential
This uses halts/1 to prove existential termination of a modest class of goals. The current evaluator eval/1 just falls back to the underlying engine:
halts(halts(_)).
eval(Input) :- Input.
:- \+ \+ halts(halts(eval(_))).

safe_eval(Input) :-
    halts(eval(Input)),
    eval(Input).
?- eval((repeat, false)).
C-c C-c
Action (h for help) ? a
abort
% Execution Aborted
?- safe_eval((repeat, false)).
false.
The optional but highly recommended goal directive \+ \+ halts(halts(eval(_))) ensures that halts will always halt when run on eval applied to anything.
The advantage of splitting the problem into a termination checker and an evaluator is that the two are decoupled: you can use any evaluation strategy you want. halts can be gradually augmented with more advanced methods to expand the class of allowed goals, which frees up eval to do the same, e.g. clause reordering based on static/runtime mode analysis, propagating failure, etc. eval can itself expand the class of allowed goals by improving termination properties which are understood by halts.
One caveat: inputs that use meta-logical predicates like var/1 could subvert the goal directive. Maybe just disallow such predicates at first, and again relax the restriction over time as you discover safe patterns of use.
Universal-Existential
This example uses a meta-interpreter, adapted from the Power of Prolog chapter on meta-interpreters, to prune off branches which can't be proven to existentially terminate:
eval(true).
eval((A, B)) :- eval(A), eval(B).
eval((A ; _)) :- halts(eval(A)), eval(A).
eval((_ ; B)) :- halts(eval(B)), eval(B).
eval(g(Head)) :-
    clause(Head, Body),
    halts(eval(Body)),
    eval(Body).
So here we're destroying branches, rather than refusing to evaluate the goal.
For improved efficiency, you could start by naively evaluating the input goal and building up per-branch sets of visited clause bodies (using e.g. clause/3), and only invoke halts when you are about to revisit a clause in the same branch.
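A hedged sketch of that optimization (assuming SWI-style clause/3, which also yields a clause reference, and memberchk/2 from library(lists); the Seen argument threads the per-branch set, and disjunction would be handled as before):

eval(true, _Seen).
eval((A, B), Seen) :- eval(A, Seen), eval(B, Seen).
eval(g(Head), Seen) :-
    clause(Head, Body, Ref),
    (   memberchk(Ref, Seen)             % about to revisit a clause in this branch
    ->  halts(eval(Body, [Ref|Seen]))    % only now pay for the termination check
    ;   true
    ),
    eval(Body, [Ref|Seen]).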
Universal-Universal
The above meta-interpreter rules out at least all the diverging branches, but may still have an infinite number of individually terminating branches. If we want to ensure universal termination we can again do everything before entering eval, as in the existential-existential variation:
...
:- \+ \+ halts(halts(\+ \+ eval(_))).
...

safe_eval(Input) :-
    halts(\+ \+ eval(Input)),
    eval(Input).
So we're just adding in universal quantification.
One interesting thing you could try is running halts itself using eval. This could yield speedups, better termination properties, or qualitatively new capabilities, but would of course require the goal directive and halts to be written according to eval's semantics. E.g. if you remove double negations then \+ \+ would not universally quantify, and if you propagate false or otherwise don't conform to the default left-to-right strategy then the (goal, false) test for universal termination (PoP chapter on termination) also would not work.
The maplist/3 predicate has the following form
maplist(:Goal, ?List1, ?List2)
However, the very similar predicate findall/3 has the form
findall(+Template, :Goal, -Bag)
Not only does it have a goal but a template as well. I've found this template to be quite useful in a number of places and began to wonder why maplist/3 doesn't have one.
Why doesn't maplist/3 have a template argument while findall/3 does? What is the salient difference between these predicates?
Templates as in findall/3, setof/3, and bagof/3 are an attempt to simulate proper quantifications with Prolog's variables. Most of the time (and here in all three cases) they involve explicit copying of those terms within the template.
For maplist/3 such mechanisms are not always necessary since the actual quantification is here about the lists' elements only. Commonly, no further modification happens. Instead of using templates, the first argument of maplist/3 is an incomplete goal that lacks two further arguments.
maplist(Goal_2, Xs, Ys).
If you insist, you can get exactly your template version using library(lambda):
templmaplist(Template1, Template2, Goal_0, Xs, Ys) :-
    maplist(\Template1^Template2^Goal_0, Xs, Ys).
(Note that I avoid calling this maplist/5, since this is already defined with another meaning)
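For example, assuming library(lambda) is loaded:

?- templmaplist(X, Y, Y is X + 1, [1,2,3], Ys).
Ys = [2, 3, 4].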
In general, I rather avoid making "my own templates" since this leads so easily to misunderstandings (already between me and me): The arguments are not the pure relational arguments one is usually expecting. By using (\)/1 instead, the local variables are somewhat better handled and more visible as being special.
... ah, and there is another good reason to rather avoid templates: they actually force you to always take into account some less-than-truly-pure mechanism, such as copying. This means that your program may expose some anomalies w.r.t. monotonicity. You really have to look into the very details.
On the other hand without templates, as long as there is no copying involved, even your higher-order predicates will maintain monotonicity like a charm.
Considering your concrete example will make clear why a template is not needed for maplist/3:
In maplist/N and other higher-order predicates, you can use currying to fix a particular argument.
For example, you can write the predicate:
:- use_module(library(clpfd)).   % for (#=)/2

p(Z, X, Y) :-
    Z #= X + Y.
And now your example works exactly as expected without the need for a template:
?- maplist(p(1), [1,2,3,4], [0,-1,-2,-3]).
true.
You can use library(lambda) to dynamically reorder arguments, to make this even more flexible.
What is the salient difference between these predicates?
findall/3 (and family: setof/3 and bagof/3) cannot be implemented in pure Prolog (the monotonic subset without side effects), while maplist/N is simply a kind of 'macro', implementing a boilerplate visit of the list(s).
In maplist/N nothing is assumed about the determinacy of the predicate, since the execution flow is controlled by the list(s) pattern(s). findall/3 is a list constructor, and it is essential that the goal terminates; there is also (I see) a need to indicate what to retain from each successful goal invocation.
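Indeed, the usual textbook definition of maplist/3 is just a few lines of pure Prolog:

maplist(_, [], []).
maplist(Goal, [X|Xs], [Y|Ys]) :-
    call(Goal, X, Y),
    maplist(Goal, Xs, Ys).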
There are several interesting problems with coroutining. For example, we want to reclaim unreachable frozen goals. But there is a problem for Prolog systems that don't support cyclic terms. Namely, a freeze such as:
?- freeze(V, p(...V...)).
leads to a loop in the internal data structure, since the frozen goal contains the very variable it is suspended on. A simple workaround would be to curry the frozen goal: instead of working with a predicate freeze/2, we would work with a predicate guard/2, which could be defined as follows:
guard(V, C) :- freeze(V, call(C, V)).
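For illustration, with an ordinary freeze/2 (as in SICStus or SWI-Prolog), note that the closure handed to guard/2 no longer mentions V itself:

p(X) :- write(woken(X)), nl.

?- guard(V, p), V = 42.
woken(42)
V = 42.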
But how could we define freeze/2 in terms of guard/2? The obvious definition doesn't work, since it doesn't introduce a new variable, and we still have the problem that the closure contains V (assuming a lambda library where (\)/2 is the lambda abstraction):
freeze(V, G) :- guard(V, V\G).
Bye
I often end up writing Prolog code that involves some arithmetic calculation (or state information important throughout the program): I first obtain the value stored in a predicate, then recalculate it, and finally store the new value using retractall and assert, because in Prolog we cannot assign a value to a variable twice using is (thus making almost every variable that needs modification global). I have come to know that this is not a good practice in Prolog. In this regard I would like to ask:
Why is it a bad practice in Prolog (though I myself don't like to go through the above-mentioned steps just to have a kind of flexible (modifiable) variable)?
What are some general ways to avoid this practice? Small examples will be greatly appreciated.
P.S. I just started learning Prolog. I do have programming experience in languages like C.
Edited for further clarification
A bad example (in WIN-PROLOG) of what I mean is given below:
:- dynamic(value/1).
:- assert(value(0)).

adds :-
    value(X),
    NewX is X + 4,
    retractall(value(_)),
    assert(value(NewX)).

mults :-
    value(Y),
    NewY is Y * 2,
    retractall(value(_)),
    assert(value(NewY)).

start :-
    retractall(value(_)),
    assert(value(3)),
    adds,
    mults,
    value(Q),
    write(Q).
Then we can query like:
?- start.
Here it is very trivial, but in real programs and applications, the method of global variables shown above can seem unavoidable. Sometimes the list of declarations like assert(value(0)) ... grows very long, with many more assert goals defining more variables. This is done to make it possible to communicate values between different predicates and to store the states of variables during the runtime of the program.
Finally, I'd like to know one more thing:
When does the practice mentioned above become unavoidable, in spite of the various solutions you have suggested to avoid it?
The general way to avoid this is to think in terms of relations between states of your computations: You use one argument to hold the state that is relevant to your program before a calculation, and a second argument that describes the state after some calculation. For example, to describe a sequence of arithmetic operations on a value V0, you can use:
state0_state(V0, V) :-
    operation1_result(V0, V1),
    operation2_result(V1, V2),
    operation3_result(V2, V).
Notice how the state (in your case: the arithmetic value) is threaded through the predicates. The naming convention V0 -> V1 -> ... -> V scales easily to any number of operations and helps to keep in mind that V0 is the initial value, and V is the value after the various operations have been applied. Each predicate that needs to access or modify the state will have an argument that allows you to pass it the state.
A huge advantage of threading the state through like this is that you can easily reason about each operation in isolation: You can test it, debug it, analyze it with other tools etc., without having to set up any implicit global state. As another huge benefit, you can then use your programs in more directions provided you are using sufficiently general predicates. For example, you can ask: Which initial values lead to a given outcome?
?- state0_state(V0, given_outcome).
This is of course not readily possible when using the imperative style. You should therefore use constraints instead of is/2, because is/2 works in only one direction. Constraints are much easier to use and a more general, modern alternative to low-level arithmetic.
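For instance, a minimal sketch with library(clpfd) (the operation bodies below are made up for illustration):

:- use_module(library(clpfd)).

operation1_result(V0, V) :- V #= V0 + 4.
operation2_result(V0, V) :- V #= V0 * 2.
operation3_result(V0, V) :- V #= V0 - 1.

?- state0_state(3, V).
V = 13.

?- state0_state(V0, 13).    % the same relation, used in reverse
V0 = 3.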
The dynamic database is also slower than threading states through in variables, because it performs indexing etc. on each assertz/1.
1 - It's bad practice because it destroys the declarative model that (pure) Prolog programs exhibit.
The programmer must then think in procedural terms, and the procedural model of Prolog is rather complicated and difficult to follow.
Specifically, we must be able to decide about the validity of asserted knowledge while the program backtracks, i.e. follows alternative paths to those already tried, which (maybe) caused the assertions.
2 - We need additional variables to keep the state. A practical, if maybe not very intuitive, way is to use grammar rules (a DCG) instead of plain predicates. Grammar rules are translated with two extra list arguments, normally hidden, and we can use those arguments to pass the state around implicitly, referencing or changing it only where needed.
A really interesting introduction is DCGs in Prolog by Markus Triska. Look for "Implicitly passing states around": you'll find this enlightening small example:
num_leaves(nil), [N1] --> [N0], { N1 is N0 + 1 }.
num_leaves(node(_, Left, Right)) -->
    num_leaves(Left),
    num_leaves(Right).
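For example, counting the leaves of a small tree, with the counter threaded through the hidden arguments:

?- phrase(num_leaves(node(a, node(b, nil, nil), nil)), [0], [N]).
N = 3.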
More generally, and for further practical examples, see Thinking in States, from the same author.
edit: generally, assert/retract are required only if you need to change the database, or to keep track of computation results across backtracking. A simple example from my (very) old Prolog interpreter:
findall_p(X, G, _) :-
    asserta(found('$mark')),
    call(G),
    asserta(found(X)),
    fail.
findall_p(_, _, N) :-
    collect_found([], N),
    !.

collect_found(S, L) :-
    getnext(X),
    !,
    collect_found([X|S], L).
collect_found(L, L).

getnext(X) :-
    retract(found(X)),
    !,
    X \= '$mark'.
findall/3 can be seen as the basic all-solutions predicate. That code should be essentially the same as in the Clocksin-Mellish textbook, Programming in Prolog. I used it while testing the 'real' findall/3 I implemented. You can see that it's not reentrant, because the '$mark' would be aliased.
Common Lisp allows exception handling through conditions and restarts. In rough terms, when a function throws an exception, the "catcher" can decide how/whether the "thrower" should proceed. Does Prolog offer a similar system? If not, could one be built on top of existing predicates for walking and examining the call stack?
The ISO/IEC standard of Prolog provides only a very rudimentary exception and error handling mechanism which is - more or less - comparable to what Java offers and far away from Common Lisp's rich mechanism, but there are still some points worth noting. In particular, beside the actual signalling and handling mechanism, many systems provide a mechanism similar to unwind-protect. That is, a way to ensure that a goal will be executed, even in the presence of otherwise unhandled signals.
ISO throw/1, catch/3
An exception is raised/thrown with throw(Term). First, a copy of Term is created with copy_term/2, let's call it Termcopy, and then this new copy is used to search for a corresponding catch(Goal, Pattern, Handler) whose second argument unifies with Termcopy. When Handler is executed, all unifications caused by Goal are undone. So there is no way for the Handler to access the substitutions present when throw/1 was executed, and there is no way to continue at the place where throw/1 was executed.
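A small example of this copy semantics (transcript from SWI-Prolog; other conforming systems behave alike):

?- catch((X = 1, throw(oops(X))), oops(Y), true).
Y = 1.

Y was unified with the copy of the ball, while the binding X = 1 made inside the goal has already been undone by the time the handler runs.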
Errors of built-in predicates are signaled by executing throw(error(Error_term, Imp_def)) where Error_term corresponds to one of ISO's error classes and Imp_def may provide implementation defined extra information (like source file, line number etc).
There are many cases where handling an error locally would be of great benefit, but it is deemed by many implementors to be too complex to implement.
The additional effort to make a Prolog processor handle each and every error locally is quite considerable, and much larger than in Common Lisp or other programming languages. This is due to the very nature of unification in Prolog: handling an error locally would require undoing the unifications performed during the execution of the built-in. An implementor thus has two ways to implement this:
create a "choice point" at the time of invoking a built-in predicate, this would incur a lot of additional overhead, both for creating this choice point and for "trailing" subsequent bindings
go through each and every built-in predicate manually and decide on a case-by-case basis how to handle errors — while this is the most efficient in terms of runtime overheads, this is also the most costly and error-prone approach
Similar complexities are caused by exploiting WAM registers within built-ins. Again, one has the choice between a slow system or one with significant implementation overhead.
exception_handler/3
Many systems, however, provide better mechanisms internally, but few offer them consistently to the programmer. IF/Prolog provides exception_handler/3, which has the same arguments as catch/3 but handles the error or exception locally:
[user] ?- catch((arg(a,f(1),_); Z=ok), error(type_error(_,_),_), fail).
no
[user] ?- exception_handler((arg(a,f(1),_); Z=ok), error(type_error(_,_),_), fail).
Z = ok
yes
setup_call_cleanup/3
This built-in is offered by quite a few systems. It is very similar to unwind-protect, but requires some additional complexity due to Prolog's backtracking mechanism. See its current definition.
All these mechanisms need to be provided by the system implementor; they cannot be built on top of ISO Prolog.
You can use hypothetical reasoning to implement what you want. Let's say a Prolog system that allows hypothetical reasoning supports the following inference rule:
G, A |- B
----------- (Right ->)
G |- A -> B
There are some Prolog systems that support this, for example lambda Prolog. You can now use hypothetical reasoning to implement, for example, restart/2 and signal_condition/2. Assuming the hypothetical reasoning is done via (-:)/2, we could then have:
restart(Goal, Handler) :-
    (handler(Handler) -: Goal).

signal_condition(Condition, Restart) :-
    handler(Handler),
    call(Handler, Condition, Restart), !.
signal_condition(Condition, _) :-
    throw(Condition).
This solution will not needlessly traverse the whole stack trace, but will directly query for a handler. But it begs the question whether I need a special Prolog, or whether I can do the hypothetical reasoning by myself.
As a first approximation the (-:)/2 can be implemented as follows:
% An operator declaration is needed for (-:)/2 to parse (the priority is an arbitrary choice):
:- op(1150, xfx, -:).

(Clause -: Goal) :- assume(Clause), Goal, retire(Clause).

assume(Clause) :- asserta(Clause).
assume(Clause) :- once(retract(Clause)), fail.   % undo the assertion on backtracking

retire(Clause) :- once(retract(Clause)).
retire(Clause) :- asserta(Clause), fail.         % restore the clause on backtracking
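For example (q/0 is just a placeholder fact; the hypothesis is visible only while the goal runs):

?- (q -: q).
true.

?- q.
false.    % the hypothesis is gone again (some systems may report an existence error instead)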
But the above will not work correctly if Goal issues a cut or an exception. So a better solution, available for example in Jekejeke Minlog 0.6, would be:
(Clause -: Goal) :-
    compile(Clause, Ref),
    assume_ref(Ref),
    Goal,
    retire_ref(Ref).

assume_ref(Ref) :- sys_atomic((recorda(Ref), sys_unbind(erase(Ref)))).
retire_ref(Ref) :- sys_atomic((erase(Ref), sys_unbind(recorda(Ref)))).
The sys_unbind/1 predicate schedules an undo goal on the binding list; it corresponds to undo/1 in SICStus. The binding list is resilient to cuts. sys_atomic/1 assures that the undo goal is always scheduled, even if an external signal happens during execution, such as an end-user-issued abort. It corresponds to how, for example, the first argument of setup_call_cleanup/3 is handled.
The advantage of using clause references here is that the clause is compiled only once, even if backtracking happens between the goal and the continuation after the (-:)/2. Otherwise, though, this solution is most likely slower than putting a goal on the stack trace by calling it. But one could imagine further refinements of a Prolog system, for example (-:)/2 as a primitive, and appropriate compilation techniques.
ISO Prolog defines these predicates:
throw/1, which throws an exception. The argument is the exception to be thrown (any term).
catch/3, which executes a goal and catches certain exceptions, in which case it executes an exception handler. The first argument is the goal to be called, the second argument is the exception template (if an exception thrown by throw/1 unifies with this template, the handler goal is executed), and the third argument is the handler goal that is executed.
Example usage:
test :-
    catch(my_goal, my_exception(Args), (write(exception(Args)), nl)).

my_goal :-
    throw(my_exception(test)).
Regarding your note "If not, could one be built on top of existing predicates for walking and examining the call stack?", I don't think there is a general way to do this. Maybe look at the documentation of the Prolog system you are using to see if there is some way to walk the stack.
As false mentioned in his answer, ISO Prolog doesn't allow this. However, some experimentation shows that SWI-Prolog has provided a mechanism on which conditions and restarts could be built. A very rough proof of concept follows.
The "catcher" invokes restart/2 to call a goal and supplies a predicate for choosing among available restarts should a condition be raised. The "thrower" invokes signal_condition/2. The first argument is the condition to raise. The second argument will be bound to a chosen restart. If no restart is chosen, the condition becomes an exception.
restart(Goal, _) :- % signal_condition/2 finds this predicate in the call stack
    call(Goal).

signal_condition(Condition, Restart) :-
    prolog_current_frame(Frame),
    prolog_frame_attribute(Frame, parent, Parent),
    signal_handler(Parent, Condition, Restart).

signal_handler(Frame, Condition, Restart) :-
    (   prolog_frame_attribute(Frame, goal, restart(_, Handler)),
        call(Handler, Condition, Restart)
    ->  true
    ;   prolog_frame_attribute(Frame, parent, Parent)
    ->  signal_handler(Parent, Condition, Restart)
    ;   throw(Condition) % reached top of call stack
    ).
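A possible interaction with this proof of concept (rocket/0 and the handler/2 fact below are invented for illustration; if no frame in the chain chooses a restart, the condition escapes as an ordinary exception):

% The chooser recognizes the condition and picks a restart.
handler(out_of_fuel, refuel).

rocket :-
    signal_condition(out_of_fuel, Restart),
    (   Restart == refuel
    ->  write(refueling), nl
    ;   true
    ).

% Expected, assuming the frame walk finds the restart/2 frame on the stack:
% ?- restart(rocket, handler).
% refueling
% true.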