The reverse of applying a lambda expression, eta+alpha conversion in Prolog?

There are several interesting problems with coroutining. For example, we want to reclaim unreachable frozen goals. But there is a problem for Prolog systems that don't support cyclic terms. Namely, a freeze:
?- freeze(V, p(...V...)).
leads to a loop in the internal data structure. A simple workaround would be currying the frozen goal. Thus instead of working with a predicate freeze/2, we would work with a predicate guard/2, which could be defined as follows:
guard(V, C) :- freeze(V, call(C, V)).
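For instance, on a system providing freeze/2 (e.g. SWI-Prolog), guard/2 would behave like this:
?- guard(V, writeln), V = hello.
hello
V = hello.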
But how could we define freeze/2 in terms of guard/2? The obvious definition doesn't work, since it doesn't introduce a new variable, and we still have the problem that the closure contains V (assuming a lambda library where (\)/2 is the lambda abstraction):
freeze(V, G) :- guard(V, V\G).
Bye

Related

functor vs predicate - definition for students

The question of the difference between a functor and a predicate in Prolog is asked often.
I am trying to develop an informal definition that is suitable for new students.
A functor is the name of a predicate. The word functor is used when
discussing syntax, such as arity, affix type, and relative priority
over other functors. The word predicate is used when discussing
logical and procedural meaning.
This looks "good enough" to me.
Question: Is it good enough, or is it fundamentally flawed?
To be clear, I am aiming to develop a useful intuition, not write legalistic text for an ISO standard!
The definition in https://www.swi-prolog.org/pldoc/man?section=glossary is:
"functor: Combination of name and arity of a compound term. The term foo(a,b,c) is said to be a term belonging to the functor foo/3." This does not help a lot, and certainly doesn't explain the difference from a predicate, which is defined: "Collection of clauses with the same functor (name/arity). If a goal is proved, the system looks for a predicate with the same functor, then uses indexing to select candidate clauses and then tries these clauses one-by-one. See also backtracking.".
One of the things that often confuses students is that foo(a) could be a term, a goal, or a clause head, depending on the context.
One way to think about term versus predicate/goal is to treat call/1 as if it is implemented by an "infinite" number of clauses that look like this:
call(foo(X)) :- foo(X).
call(foo(X,Y)) :- foo(X,Y).
call(bar(X)) :- bar(X).
etc.
This is why you can pass around a term (which is just data) but treat it as a "goal". So, in Prolog there's no need to have a special "closure" or "thunk" or "predicate" data type - everything can be treated as just data and can be executed by use of the call/1 predicate.
(There are also variations on "call", such as call/2, which can be defined as:
call(foo, X) :- foo(X).
call(foo(X), Y) :- foo(X, Y).
etc.)
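For example, at the toplevel (using the standard built-in writeln/1, which also appears below):
?- Goal = writeln(hi), call(Goal).
hi
Goal = writeln(hi).

?- call(writeln, hi).
hi
true.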
This can be used to implement "meta-predicates", such as maplist/2, which takes a list and applies a predicate to each element:
?- maplist(writeln, [one,two,three]).
one
two
three
where a naïve implementation of maplist/2 is (the actual implementation is a bit more complicated, for efficiency):
maplist(_Goal, []).
maplist(Goal, [X|Xs]) :-
    call(Goal, X),
    maplist(Goal, Xs).
The answer by Peter Ludemann is already very good. I want to address the following from your question:
To be clear, I am aiming to develop a useful intuition, not write legalistic text for an ISO standard!
If you want to develop intuition, don't bother writing definitions. Definitions end up being written in legalese or are useless as definitions. This is why we sometimes explain by describing how the machine will behave: that is supposedly well-defined, while any statement written in natural language is by definition ambiguous. It is interpreted by a human brain, and you have no idea what is in that brain when it interprets it. As a defense, you end up using legalese to write definitions in natural language.
You can give examples, which will leave some impression and probably develop intuition.
"The Prolog compound term a(b, c) can be described by the functor a/2. Here, a is the term name, and 2 is its arity".
"The functor foo/3 describes any term with a name foo and three arguments."
"Atomic terms by definition have arity 0: for example atoms or numbers. The atom a belongs to the functor a/0."
"You can define two predicates with the same name, as long as they have a different number of arguments."
There is also the possibility of confusion because some system predicates that allow introspection might take either a functor or the head of the predicate they work on. For example, abolish/1 takes a functor, while retractall/1 takes the predicate head.

Why doesn't maplist/3 use a template?

The maplist/3 predicate has the following form
maplist(:Goal, ?List1, ?List2)
However, the very similar predicate findall/3 has the form
findall(+Template, :Goal, -Bag)
Not only does it have a goal but a template as well. I've found this template to be quite useful in a number of places and began to wonder why maplist/3 doesn't have one.
Why doesn't maplist/3 have a template argument while findall/3 does? What is the salient difference between these predicates?
Templates as in findall/3, setof/3, and bagof/3 are an attempt to simulate proper quantifications with Prolog's variables. Most of the time (and here in all three cases) they involve explicit copying of those terms within the template.
For maplist/3 such mechanisms are not always necessary, since the actual quantification here is about the lists' elements only. Commonly, no further modification happens. Instead of using templates, the first argument of maplist/3 is an incomplete goal that lacks two further arguments.
maplist(Goal_2, Xs, Ys).
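For concreteness, a naive rendering of this idea (called maplist_/3 here to avoid clashing with the library predicate; real implementations are more elaborate) supplies the two lacking arguments via call/3:
maplist_(_Goal_2, [], []).
maplist_(Goal_2, [X|Xs], [Y|Ys]) :-
    call(Goal_2, X, Y),        % the two missing arguments are supplied here
    maplist_(Goal_2, Xs, Ys).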
If you insist, you can get exactly your template version using library(lambda):
templmaplist(Template1, Template2, Goal_0, Xs, Ys) :-
    maplist(\Template1^Template2^Goal_0, Xs, Ys).
(Note that I avoid calling this maplist/5, since this is already defined with another meaning)
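For instance, a small usage sketch (assuming library(lambda) is installed; the template variables themselves stay unbound at the toplevel):
?- use_module(library(lambda)).
?- templmaplist(X, Y, Y is X + 1, [1,2,3], Ys).
Ys = [2, 3, 4].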
In general, I'd rather avoid making "my own templates", since this leads so easily to misunderstandings (already between me and me): the arguments are not the pure relational arguments one usually expects. By using (\)/1 instead, the local variables are handled somewhat better and are more visible as being special.
... ah, and there is another good reason to avoid templates: they actually force you to always take into account some less-than-truly-pure mechanism such as copying. This means that your program may expose some anomalies w.r.t. monotonicity. You really have to look into the very details.
On the other hand without templates, as long as there is no copying involved, even your higher-order predicates will maintain monotonicity like a charm.
Considering your concrete example will make clear why a template is not needed for maplist/3:
In maplist/N and other higher-order predicates, you can use currying to fix a particular argument.
For example, you can write the predicate:
:- use_module(library(clpfd)).   % for (#=)/2; may be built in on some systems

p(Z, X, Y) :-
    Z #= X + Y.
And now your example works exactly as expected without the need for a template:
?- maplist(p(1), [1,2,3,4], [0,-1,-2,-3]).
true.
You can use library(lambda) to dynamically reorder arguments, to make this even more flexible.
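For instance, a sketch (assuming library(lambda) and p/3 from above) that curries p/3 on its last argument instead of its first, so each element of the second list is the corresponding element of the first plus 1:
?- use_module(library(lambda)).
?- maplist(\X^Y^p(Y, X, 1), [1,2,3,4], [2,3,4,5]).
true.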
What is the salient difference between these predicates?
findall/3 (and family: setof/3 and bagof/3) cannot be implemented in pure Prolog (the monotonic subset without side effects), while maplist/N is simply a kind of 'macro', implementing a boilerplate visit of the list(s).
In maplist/N nothing is assumed about the determinacy of the predicate, since the execution flow is controlled by the list(s) pattern(s). findall/3, by contrast, is a list constructor: it is essential that the goal terminates, and (I see) a need to indicate what to retain from every successful goal invocation.

Term-expansion workflows

I'm adding library support for common term-expansion workflows (1). Currently, I have defined a "set" workflow, where sets of term-expansion rules (2) are tried until one of them succeeds, and a "pipeline" workflow, where expansion results from one set of term-expansion rules are passed to the next set in the pipeline. I wonder if there are other sensible term-expansion workflows that, even if less common, have practical uses and are thus still worthy of library support.
(1) For Logtalk, the current versions can be found at:
https://github.com/LogtalkDotOrg/logtalk3/blob/master/library/hook_pipeline.lgt
https://github.com/LogtalkDotOrg/logtalk3/blob/master/library/hook_set.lgt
(2) A set of expansion rules is to be understood in this context as a set of clauses for the term_expansion/2 user-defined hook predicate (also possibly the goal_expansion/2 user-defined hook predicate, although this is less likely given the fixed point semantics used for goal-expansion) defined in a Prolog module or a Logtalk object (other than the user pseudo-module/object).
The fixpoint is, at a certain level, already both a set and a pipeline during expansion. Its expand_term/2 is just the transitive closure of the one-step term_expansion/2 clauses. But this works only while descending into a term; in my opinion, we also need something when assembling the term again.
In rare cases this transitive closure even needs the (==)/2 check, as found in some Prolog systems. Most likely it can simply stop when none of the term_expansion/2 clauses does anything. So we have, basically, without the (==)/2 check:
expand_term(X, Y) :- term_expansion(X, H), !, expand_term(H, Y).
expand_term(.. ..) :- /* possibly descend into meta-predicate */
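A sketch of the variant with the (==)/2 check mentioned above (expand_fix/2 is an illustrative name, not a system predicate):
expand_fix(X, Y) :-
    term_expansion(X, H),
    H \== X,                 % the (==)/2 check: stop when nothing changes
    !,
    expand_fix(H, Y).
expand_fix(X, X).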
But what I would like to see is a kind of simplification framework added to the expansion framework. So when we descend into a meta-predicate and come back, we should call a simplification hook.
This is in accordance with some theories of term rewriting, which say: the normal form (nf) of a compound is a function of the normal forms of its parts. The expansion framework would thus not deal with normal forms, only deliver redefinitions of predicates, while the simplification framework would do the normal-form work:
nf(f(t_1, .., t_n)) --> f'(nf(t_1), .., nf(t_n))
So the simplification hook would take f(nf(t_1), .., nf(t_n)), assuming that expand_term, when descending into a meta-predicate, already yields nf(t_1), .., nf(t_n) for the meta-arguments, and then simply hand f(nf(t_1), .., nf(t_n)) to the simplifier.
The simplifier would then return f'(nf(t_1), .., nf(t_n)), i.e. do its work and return a simplified form, based on the assumption that the arguments are already simplified. Such a simplifier can be quite powerful. Jekejeke Prolog delivers such a stage after expansion.
The Jekejeke Prolog simplifier and its integration into the expansion framework are open source, here and here. It is used, for example, to reorder conjunctions; here are the example rules for this purpose:
/* Predefined Simplifications */
sys_goal_simplification(( ( A, B), C), J) :-
    sys_simplify_goal(( B, C), H),
    sys_simplify_goal(( A, H), J).
Example:
(((a, b), c), d) --> (a, (b, (c, d)))
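A standalone sketch of the same idea in plain Prolog (simplify_goal/2 is an illustrative name, not the Jekejeke code): it re-associates a left-nested conjunction and re-simplifies every conjunction it builds, as discussed below.
simplify_goal(((A, B), C), J) :-
    !,
    simplify_goal((B, C), H),
    simplify_goal((A, H), J).
simplify_goal(G, G).
For the example above, simplify_goal((((a,b),c),d), G) yields G = (a,(b,(c,d))).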
The Jekejeke Prolog simplifier is extremely efficient, since it can work under the assumption that it receives already-normalized terms. It will not unnecessarily repeat pattern matching over the whole given term.
But writing simplification rules requires some common practice from rewriting systems: a simplification rule should call simplification whenever it constructs a new term.
In the example above these are the two sys_simplify_goal/2 calls: we do not, for example, simply return a new term with (B,C) in it, as an expansion rule would do. Since (B,C) was not part of the normalized arguments of sys_goal_simplification/2, we have to normalize it first.
But since the simplifier framework is intertwined with the expansion framework, I doubt that it can be called a workflow architecture. There is no specific flow direction; the result is rather a ping-pong. Nevertheless, the simplification framework can be used in a modular way.
The Jekejeke Prolog simplifier is also used in forward-chaining clause rewriting, where it generates multiple delta-computation clauses from one forward clause.
Bye

Translating a list to another list in Prolog

I tried to write simple code in Prolog which translates a list to another list. For instance, if we call listtrans([a,b,c],L), L will become [1,2,3] (a,b,c is replaced with 1,2,3). But I faced a syntax error in the last line. What is the problem? Here is my code:
trans(a,1).
trans(b,2).
trans(c,3).
listtrans([],L).
listtrans([H|T],L1):-
    trans(H,B),
    append(B,L,L2),
    listtrans(T,L2).
The error is very likely because in your code:
listtrans([H|T],L1):-
    trans(H,B),
    append(B,L,L2),
    listtrans(T,L2).
the variable L1 appears in the head, but is not referenced anywhere: did you misspell something?
Anyway, your code is not going to work.
Moreover, using append/3 for this kind of task, which is easily defined by recursion, is considered terrible (also because of the poor performance you get out of it).
Applying a function to a list is straightforward. You already know that in Prolog you don't write Y = f(X), but rather declare the functional relation between X and Y as f(X, Y). (That's basically what you did with trans(X,Y).)
Now the (easy) recursive formulation:
the transformed empty list is the empty list
the transformation of [X|Xs] is [Y|Ys] if trans(X,Y) and we recursively transform Xs into Ys
or expressed in prolog:
listtrans([],[]).
listtrans([X|Xs],[Y|Ys]) :- trans(X,Y), listtrans(Xs,Ys).
I recommend you reading the first 4 chapters of Learn Prolog Now to better understand these concepts.
There are no syntactic errors, only semantic ones. This is it, with corrections:
listtrans([],[]).
listtrans([H|T],L1):-
    trans(H,B),
    append([B],L2,L1),
    listtrans(T,L2).
But this is not Prolog style. One writes rather:
listtrans([],[]).
listtrans([A|As],[I|Is]):-
    trans(A,I),
    listtrans(As,Is).
Note that in Prolog explicitly appending elements is much rarer than in languages supporting functions. As an extra bonus, you can now use the relation in both directions:
?- listtrans([a,b,c],Is).
Is = [1,2,3].
?- listtrans(As, [1,2,3]).
As = [a,b,c].
And you can write this more compactly:
listtrans(As, Is) :-
    maplist(trans, As, Is).
See this for more.

How to avoid using assert and retractall in Prolog to implement global (or state) variables

I often end up writing code in Prolog which involves some arithmetic calculation (or state information important throughout the program): first obtaining the value stored in a predicate, then recalculating the value, and finally storing the value using retractall and assert, because in Prolog we cannot assign a value to a variable twice using is (thus making almost every variable that needs modification global). I have come to know that this is not a good practice in Prolog. In this regard I would like to ask:
Why is it a bad practice in Prolog (though I myself don't like to go through the above-mentioned steps just to have a kind of flexible (modifiable) variable)?
What are some general ways to avoid this practice? Small examples will be greatly appreciated.
P.S. I just started learning Prolog. I do have programming experience in languages like C.
Edited for further clarification
A bad example (in win-prolog) of what I want to say is given below:
:- dynamic(value/1).
:- assert(value(0)).

adds :-
    value(X),
    NewX is X + 4,
    retractall(value(_)),
    assert(value(NewX)).

mults :-
    value(Y),
    NewY is Y * 2,
    retractall(value(_)),
    assert(value(NewY)).

start :-
    retractall(value(_)),
    assert(value(3)),
    adds,
    mults,
    value(Q),
    write(Q).
Then we can query like:
?- start.
Here it is very trivial, but in real programs and applications, the above method of global variables becomes unavoidable. Sometimes the list of declarations shown above, like assert(value(0)), ..., grows very long, with many more assert directives defining more variables. This is done to make communication of values between different predicates possible and to store the states of variables during the program's runtime.
Finally, I'd like to know one more thing:
When does the practice mentioned above become unavoidable in spite of various solutions suggested by you to avoid it?
The general way to avoid this is to think in terms of relations between states of your computations: You use one argument to hold the state that is relevant to your program before a calculation, and a second argument that describes the state after some calculation. For example, to describe a sequence of arithmetic operations on a value V0, you can use:
state0_state(V0, V) :-
    operation1_result(V0, V1),
    operation2_result(V1, V2),
    operation3_result(V2, V).
Notice how the state (in your case: the arithmetic value) is threaded through the predicates. The naming convention V0 -> V1 -> ... -> V scales easily to any number of operations and helps to keep in mind that V0 is the initial value, and V is the value after the various operations have been applied. Each predicate that needs to access or modify the state will have an argument that allows you to pass it the state.
A huge advantage of threading the state through like this is that you can easily reason about each operation in isolation: You can test it, debug it, analyze it with other tools etc., without having to set up any implicit global state. As another huge benefit, you can then use your programs in more directions provided you are using sufficiently general predicates. For example, you can ask: Which initial values lead to a given outcome?
?- state0_state(V0, given_outcome).
This is of course not readily possible when using the imperative style. You should therefore use constraints instead of is/2, because is/2 only works in one direction. Constraints are much easier to use and a more general modern alternative to low-level arithmetic.
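As a concrete sketch, the adds/mults example from the question could be rewritten with threaded state and constraints (assuming library(clpfd); the predicate names are kept from the question):
:- use_module(library(clpfd)).

adds(V0, V) :- V #= V0 + 4.
mults(V0, V) :- V #= V0 * 2.

start(Q) :-
    adds(3, V1),
    mults(V1, Q).
Now ?- start(Q). yields Q = 14, and the individual relations also work in reverse, e.g. ?- adds(X, 7). yields X = 3.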
The dynamic database is also slower than threading states through in variables, because it performs indexing etc. on each assertz/1.
1 - it's bad practice because it destroys the declarative model that (pure) Prolog programs exhibit.
The programmer must then think in procedural terms, and the procedural model of Prolog is rather complicated and difficult to follow.
Specifically, we must be able to decide about the validity of asserted knowledge while the program backtracks, i.e. follows alternative paths to those already tried, which (maybe) caused the assertions.
2 - We need additional variables to keep the state. A practical, if maybe not very intuitive, way is to use grammar rules (a DCG) instead of plain predicates. Grammar rules are translated with two extra list arguments, normally hidden, and we can use those arguments to pass the state around implicitly, referencing or changing it only where needed.
A really interesting introduction is here: DCGs in Prolog by Markus Triska. Look for "Implicitly passing states around": you'll find this enlightening small example:
num_leaves(nil), [N1] --> [N0], { N1 is N0 + 1 }.
num_leaves(node(_,Left,Right)) -->
    num_leaves(Left),
    num_leaves(Right).
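A usage sketch of this DCG, counting the nil leaves of a small tree by threading the counter through the hidden list arguments:
?- phrase(num_leaves(node(a, nil, node(b, nil, nil))), [0], [N]).
N = 3.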
More generally, and for further practical examples, see Thinking in States, from the same author.
edit: generally, assert/retract are required only if you need to change the database, or to keep track of computation results across backtracking. A simple example from my (very) old Prolog interpreter:
findall_p(X,G,_):-
    asserta(found('$mark')),
    call(G),
    asserta(found(X)),
    fail.
findall_p(_,_,N) :-
    collect_found([],N),
    !.

collect_found(S,L) :-
    getnext(X),
    !,
    collect_found([X|S],L).
collect_found(L,L).

getnext(X) :-
    retract(found(X)),
    !,
    X \= '$mark'.
findall/3 can be seen as the basic all-solutions predicate. That code should be the very same as in the Clocksin-Mellish textbook, Programming in Prolog. I used it while testing the 'real' findall/3 I implemented. You can see that it's not 'reentrant', because the '$mark' marker is shared across invocations.
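A quick usage sketch (on a system that creates found/1 dynamically on first assertion, as e.g. SWI-Prolog does):
?- findall_p(X, member(X, [a,b,c]), L).
L = [a, b, c].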
