Is it possible to write a prolog interpreter that avoids infinite recursion? - prolog

Main features
I have recently been looking into writing a Prolog meta-interpreter with a certain set of features, but I am starting to see that I don't have the theoretical knowledge to work on it.
The features are as follows:
1. Depth-first search.
2. Interprets any non-recursive Prolog program the same way a classic interpreter would.
3. Guarantees breaking out of any infinite recursion. This most likely means breaking Turing-completeness, and I'm okay with that.
4. As long as each step of the recursion reduces the complexity of the expression, keeps evaluating it. To be more specific, I want predicates to be allowed to call themselves, but I want to prevent a clause from calling a similarly or more complex version of itself.
Obviously, (3) and (4) are the ones I am having problems with. I am not sure if those 2 features are compatible. I am not even sure if there is a way to define complexity such that (4) makes logical sense.
In my research, I have come across the concept of an "unavoidable pattern", which, I believe, provides a way to ensure feature (3), as long as feature (4) has a well-formed definition.
I specifically want to know if this kind of interpreter has been proven impossible, and, if not, if theoretical or concrete work on similar interpreters has been done in the past.
Extra features
Provided the above features are possible to implement, there are extra features I want to add, and I would be grateful if you could enlighten me on their feasibility as well:
5. Systematically characterize and describe those recursions, such that, when one is detected, a user-defined predicate or clause matching this specific form of recursion could be called.
6. Detect patterns that result in an exponential number of combinatorial choices, prevent evaluation, and characterize them in the same way as (5), such that they can be handled by a built-in or user-defined predicate.
Example
Here is a simple predicate that obviously results in infinite recursion in a normal Prolog interpreter in all but the simplest of cases. This interpreter should be able to evaluate it in at most PSPACE (and, I believe, at most P if (6) is possible to implement), while still giving relevant results.
eq(E, E).                                   % reflexivity
eq(E, F) :- eq(E, F0), eq(F0, F).           % transitivity
eq(A + B, AR + BR) :- eq(A, AR), eq(B, BR). % congruence over +
eq(A + B, B + A).                           % commutativity of +
eq(A * B, B * A).                           % commutativity of *
eq((A * B) / B, A).                         % cancellation
eq(E, R) :- eq(R, E).                       % symmetry
Example of expected results:
?- eq((a + c) + b, b + (c + a)).
true.
?- eq(a, (a * b) / C).
C = b.
The fact that this kind of interpreter might prove useful (as the provided example suggests) hints to me that such an interpreter is probably impossible, but if it is, I would like to be able to understand what makes it impossible (for example, does (3) + (4) reduce to the halting problem? Is (6) NP-hard?).

If you want to guarantee termination you can conservatively assume any input goal is nonterminating until proven otherwise, using a decidable proof procedure. Basically, define some small class of goals which you know halt, and expand it over time with clever ideas.
Here are three examples, which guarantee or force three different kinds of termination respectively (also see the Power of Prolog chapter on termination):
existential-existential: at least one answer is reached before potentially diverging
universal-existential: no branches diverge but there may be an infinite number of them, so the goal may not be universally terminating
universal-universal: after a finite number of steps, every answer will be reached, so in particular there must be a finite number of answers
In the following, halts(Goal) is assumed to correctly test a goal for existential-existential termination.
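For concreteness, here is a minimal sketch (my own, not part of the answer) of what such a conservative halts/1 could start out as, accepting only a tiny decidable class of goals:
halts(true).
halts(fail).
halts(_ = _).
halts((A, B)) :- halts(A), halts(B).
halts((A ; B)) :- halts(A), halts(B).
% No clause for user-defined predicates yet: they are conservatively
% assumed nonterminating until a cleverer analysis proves otherwise.
This is the "small class of goals, expanded over time with clever ideas" mentioned above.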
Existential-Existential
This uses halts/1 to prove existential termination of a modest class of goals. The current evaluator eval/1 just falls back to the underlying engine:
halts(halts(_)).
eval(Input) :- Input.
:- \+ \+ halts(halts(eval(_))).
safe_eval(Input) :-
    halts(eval(Input)),
    eval(Input).
?- eval((repeat, false)).
^CAction (h for help) ? abort
% Execution Aborted
?- safe_eval((repeat, false)).
false.
The optional but highly recommended goal directive \+ \+ halts(halts(eval(_))) ensures that halts will always halt when run on eval applied to anything.
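To see why the double negation is needed: \+ \+ Goal tests whether Goal is provable without retaining any bindings, so the argument of eval/1 stays effectively universally quantified. A quick toplevel illustration (mine):
?- \+ \+ X = 1.
true.                % succeeds, but X remains unbound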
The advantage of splitting the problem into a termination checker and an evaluator is that the two are decoupled: you can use any evaluation strategy you want. halts can be gradually augmented with more advanced methods to expand the class of allowed goals, which frees up eval to do the same, e.g. clause reordering based on static/runtime mode analysis, propagating failure, etc. eval can itself expand the class of allowed goals by improving termination properties which are understood by halts.
One caveat - inputs which use meta-logical predicates like var/1 could subvert the goal directive. Maybe just disallow such predicates at first, and again relax the restriction over time as you discover safe patterns of use.
Universal-Existential
This example uses a meta-interpreter, adapted from the Power of Prolog chapter on meta-interpreters, to prune off branches which can't be proven to existentially terminate:
eval(true).
eval((A,B)) :- eval(A), eval(B).
eval((A;_)) :- halts(eval(A)), eval(A).
eval((_;B)) :- halts(eval(B)), eval(B).
eval(g(Head)) :-
    clause(Head, Body),
    halts(eval(Body)),
    eval(Body).
So here we're destroying branches, rather than refusing to evaluate the goal.
For improved efficiency, you could start by naively evaluating the input goal and building up per-branch sets of visited clause bodies (using e.g. clause/3), and only invoke halts when you are about to revisit a clause in the same branch.
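A sketch of that idea for the g/1 case (my own; clause/3 with a clause reference as third argument is an SWI-Prolog extension, and memberchk/2 comes from library(lists)):
% Thread a list of clause references visited in the current branch,
% and consult halts/1 only when a clause is about to be revisited.
eval(g(Head), Visited) :-
    clause(Head, Body, Ref),
    (   memberchk(Ref, Visited)
    ->  halts(eval(Body))        % about to revisit: prove termination
    ;   true
    ),
    eval(Body, [Ref|Visited]).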
Universal-Universal
The above meta-interpreter rules out at least all the diverging branches, but may still have an infinite number of individually terminating branches. If we want to ensure universal termination we can again do everything before entering eval, as in the existential-existential variation:
...
:- \+ \+ halts(halts(\+ \+ eval(_))).
...
safe_eval(Input) :-
    halts(\+ \+ eval(Input)),
    eval(Input).
So we're just adding in universal quantification.
One interesting thing you could try is running halts itself using eval. This could yield speedups, better termination properties, or qualitatively new capabilities, but it would of course require the goal directive and halts to be written according to eval's semantics. For example, if you remove double negations then \+ \+ would not universally quantify, and if you propagate failure or otherwise don't conform to the default left-to-right strategy then the (Goal, false) test for universal termination (PoP chapter on termination) would also not work.

Related

SWI-Prolog: Understanding Infinite Loops

I am currently trying to understand the basics of Prolog.
I have a knowledge base like this:
p(a).
p(X) :- p(X).
If I enter the query p(b), the unification with the fact fails and the rule p(X) :- p(X) is used, which leads the unification with the fact to fail again. Why is the rule applied over and over again after that? Couldn't Prolog return false at this point?
After a certain time I get the message "Time limit exceeded".
I'm not quite sure why Prolog uses the rule over and over again, but since it does, I don't understand why I get a different error message than in the following case.
To be clear, I do understand that "p(X) if p(X)" is an unreasonable rule, but I would like to understand what exactly happens there.
If I have a knowledge base like this:
p(X) :- p(X).
p(a).
There is no chance of reaching a result, even with p(a), because the fact is below the rule and the rule is called over and over again. For this variant I receive a different error message almost instantly, "ERROR: Out of local stack", which is comprehensible.
Now my question - what is the difference between those cases?
Why am I receiving different error messages, and why does Prolog not return false after the first application of the rule in the first case? My idea would be that in the first case the procedure is somehow restarted each time the rule gets called, and in the second case the same procedure calls the rule over and over again. I would be grateful if somebody could elaborate on this.
Update: If I query p(a). against the 2nd KB, as said, I receive "Out of local stack", but if I query p(b). against the same KB I get "Time limit exceeded". This is even more confusing to me; shouldn't the constant be irrelevant to the infinite loop?
Let us first consider the following program fragment that both examples have in common:
p(X) :- p(X).
As you correctly point out, it is obvious that no particular solutions are described by this fragment in isolation. Declaratively, we can read it as: "p(X) holds if p(X) holds". OK, so we cannot deduce any concrete solution from only this clause.
This explains why p(b) cannot hold if only this fragment is considered. Additionally, p(a) does not imply p(b) either, so no matter where you put the fact p(a), you will never derive p(b) from these two clauses.
Procedurally, Prolog still attempts to find cases where p(X) holds. So, if you post ?- p(X). as a query, Prolog will try to find a resolution refutation, disregarding what it has "already tried". For this reason, it will try to prove p(X) over and over. Prolog's default resolution strategy, SLDNF resolution, keeps no memory of which branches have already been tried, and also for this reason can be implemented very efficiently, with little overhead compared to other programming languages.
The difference between an infinite deduction attempt and an "out of local stack" error can only be understood procedurally, by taking into account how Prolog executes these fragments.
Prolog systems typically apply an optimization known as last call optimization, a generalization of tail call optimization. It is applicable if no more choice-points remain, and means that the system can discard (or reuse) existing stack frames.
The key difference between your two examples is obviously where you add the fact: Either before or after the recursive clause.
In your case, if the recursive clause comes last, then no more choice-points remain at the time the goal p(X) is invoked. For this reason, an existing stack frame can be reused or discarded.
On the other hand, if you write the recursive clause first, and then query ?- p(X). (or ?- p(a).), then both clauses are applicable, and Prolog remembers this by creating a choice-point. When the recursive goal is invoked, the choice-point still exists, and therefore the stack frames pile up until they exceed the available limits.
If you query ?- p(b)., then argument indexing detects that p(a) is not applicable, and again only the recursive clause applies, independent of whether you write it before or after the fact. This explains the difference between querying p(X) (or p(a)) and p(b) (or other queries). Note that Prolog implementations differ regarding the strength of their indexing mechanisms. In any case, you should expect your Prolog system to index at least on the outermost functor and arity of the first argument. If necessary, more complex indexing schemes can be constructed manually on top of this mechanism. Modern Prolog systems provide JIT indexing, deep indexing and other mechanisms, and so they often automatically detect the exact subset of clauses that are applicable.
Note that there is a special form of resolution called SLG resolution, which you can use to improve termination properties of your programs in such cases. For example, in SWI-Prolog, you can enable SLG resolution by adding the following directives before your program:
:- use_module(library(tabling)).
:- table p/1.
With these directives, we obtain:
?- p(X).
X = a.
?- p(b).
false.
This coincides with the declarative semantics you expect from your definitions. Several other Prolog systems provide similar facilities.
It should be easy to grasp the concept of an infinite loop by studying how the standard repeat/0 is implemented:
repeat.
repeat :- repeat.
This creates an infinite number of choice points. The first clause, repeat., simply allows for a one-time execution. The second clause, repeat :- repeat., makes the recursion infinitely deep.
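A typical use (my own example) is a side-effecting loop that backtracking keeps restarting until a cut commits:
% Echo terms typed by the user until end_of_file, then cut out of
% the repeat-driven loop.
echo :-
    repeat,
    read(Term),
    (   Term == end_of_file
    ->  !
    ;   write(Term), nl,
        fail                    % backtrack into repeat/0
    ).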
Adding any number of parameters:
repeat(_, _, ..., _).
repeat(Param1, Param2, ..., ParamN) :- repeat(Param1, Param2, ..., ParamN).
You may add bodies to these clauses and give the parameters of the first clause meaningful names, depending on what you are trying to achieve. If the bodies contain no cuts, direct or inherited from the predicates they call, this will be an infinite loop too, just like repeat/0.

Does `setup_call_cleanup/3` detect determinism portably?

There is a very detailed Draft proposal for setup_call_cleanup/3.
Let me quote the relevant part for my question:
c) The cleanup handler is called exactly once; no later than upon failure of G. Earlier moments are:
If G is true or false, C is called at an implementation dependent moment after the last solution and after the last observable effect of G.
And this example:
setup_call_cleanup(S=1,G=2,write(S+G)).
Succeeds, unifying S = 1, G = 2.
Either: outputs '1+2'
Or: outputs on backtracking '1+_' prior to failure.
Or (?): outputs on backtracking '1+2' prior to failure.
In my understanding, this is basically because a unification is a backtrackable goal; it simply fails on reexecution. Therefore, it is up to the implementation to decide whether to call the cleanup just after the first execution (because there will be no more observable effects of the goal), or to postpone it until the second execution of the goal, which now fails.
So it seems to me that this cannot be used to detect determinism portably. Only a few built-in constructs like true, fail, ! etc. are genuinely non-backtrackable.
Is there any other way to check determinism without executing a goal twice?
I'm currently using SWI-Prolog's deterministic/1, but I'd certainly appreciate portability.
No. setup_call_cleanup/3 cannot detect determinism in a portable manner, for that would restrict an implementation in its freedom. Systems have different ways of implementing indexing, with different trade-offs. Some have only first-argument indexing, others have more than that. But systems which offer "better" indexing often behave quite unpredictably. Some systems do indexing only for nonvar terms; others also permit clauses that simply have a variable in the head, provided it is the last clause. Some may do "manual" choice-point avoidance with safe tests prior to cuts, and others just omit this. In brief, this is really a non-functional issue, and insisting on portability in this area is tantamount to slowing systems down.
However, what still holds is this: If setup_call_cleanup/3 detects determinism, then there is no further need to use a second goal to determine determinism! So it can be used to implement determinism detection more efficiently. In the general case, however, you will have to execute a goal twice.
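For illustration, here is a hedged sketch (the predicate name is mine) of such a double execution using only ISO built-ins; it requires the goal to have finitely many solutions and no side effects:
% det_call(Goal): succeeds with Goal's bindings iff Goal has exactly
% one solution. A copy is first run to exhaustion via findall/3
% (first execution), then Goal is run again to produce the bindings.
det_call(Goal) :-
    copy_term(Goal, Copy),
    findall(t, Copy, [_]),    % exactly one solution exists
    call(Goal), !.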
The current definition of setup_call_cleanup/3 was also crafted such as to permit an implementation to also remove unnecessary choicepoints dynamically.
It is thinkable (not that I have seen such an implementation) that, upon success of Call and the internal presence of a choicepoint, an implementation may examine the current choicepoints and remove them if determinism can be detected. Another possibility might be to perform some asynchronous garbage collection in between. All these options are not excluded by the current specification. It is unclear if they will ever be implemented, but it might happen once some applications depend on such a feature. This has already happened in Prolog a couple of times, so a repetition is not complete fantasy. In fact I am thinking of a particular case to help DCGs become more determinate. Who knows, maybe you will go down that path!
Here is an example how indexing in SWI depends on the history of previous queries:
?- [user].
p(_,a). p(_,b). end_of_file.
true.
?- p(1,a).
true ;
false.
?- p(_,a).
true.
?- p(1,a).
true. % now, it's determinate!
Here is an example of how indexing on the second argument is strictly weaker than first-argument indexing:
?- [user].
q(_,_). q(0,0). end_of_file.
true.
?- q(X,1).
true ; % weak
false.
?- q(1,X).
true.
As you're interested in portability, Logtalk's lgtunit tool defines a portable deterministic/1 predicate for 10 of its supported backend Prolog compilers:
http://logtalk.org/library/lgtunit_0.html
https://github.com/LogtalkDotOrg/logtalk3/blob/master/tools/lgtunit/lgtunit.lgt (starting around line 1051)
Note that different systems use different built-in predicates that approximate the intended functionality.

Term-expansion workflows

I'm adding library support for common term-expansion workflows (1). Currently, I have defined a "set" workflow, where sets of term-expansion rules (2) are tried until one of them succeeds, and a "pipeline" workflow, where expansion results from one set of term-expansion rules are passed to the next set in the pipeline. I wonder if there are other sensible term-expansion workflows that, even if less common, have practical uses and are thus still worthy of library support.
(1) For Logtalk, the current versions can be found at:
https://github.com/LogtalkDotOrg/logtalk3/blob/master/library/hook_pipeline.lgt
https://github.com/LogtalkDotOrg/logtalk3/blob/master/library/hook_set.lgt
(2) A set of expansion rules is to be understood in this context as a set of clauses for the term_expansion/2 user-defined hook predicate (also possibly the goal_expansion/2 user-defined hook predicate, although this is less likely given the fixed point semantics used for goal-expansion) defined in a Prolog module or a Logtalk object (other than the user pseudo-module/object).
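For readers unfamiliar with the two workflows, here is a rough sketch (mine, much simplified from the Logtalk versions linked above) of their intended semantics:
% "Set": try each expansion hook until one succeeds.
set_expand([Hook|Hooks], Term0, Term) :-
    (   call(Hook, Term0, Term) -> true
    ;   set_expand(Hooks, Term0, Term)
    ).

% "Pipeline": feed each result to the next stage; a stage that is
% not applicable passes the term through unchanged.
pipeline_expand([], Term, Term).
pipeline_expand([Hook|Hooks], Term0, Term) :-
    (   call(Hook, Term0, Term1) -> true
    ;   Term1 = Term0
    ),
    pipeline_expand(Hooks, Term1, Term).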
The fixpoint is already both a set and a pipeline at a certain level during expansion. Its expand_term/2 is just the transitive closure of the one-step term_expansion/2 clauses. But this works only while descending into a term; in my opinion we also need something for when the term is assembled again.
In rare cases this transitive closure even needs the (==)/2 check, as is found in some Prolog systems. Most likely it can simply stop if none of the term_expansion/2 clauses does anything. So we have basically, without the (==)/2 check:
expand_term(X, Y) :- term_expansion(X, H), !, expand_term(H, Y).
expand_term(.. ..) :- /* possibly descend into meta-predicate */
But what I would like to see is a kind of simplification framework added to the expansion framework. So when we descend into a meta-predicate and come back, we should call a simplification hook.
This is in accordance with some theories of term rewriting, which say: the normal form (nf) of a compound is a function of the normal forms of its parts. The expansion framework would thus not deal with normal forms, only deliver redefinitions of predicates, while the simplification framework would do the normal-form work:
nf(f(t_1,..,t_n)) --> f'(nf(t_1),..,nf(t_n))
So the simplification hook would take f(nf(t_1), .., nf(t_n)), assuming that expand_term, when descending into a meta-predicate, already yields nf(t_1), .., nf(t_n) for the meta-arguments, and then simply give f(nf(t_1), .., nf(t_n)) to the simplifier.
The simplifier would then return f'(nf(t_1), .., nf(t_n)), i.e., do its work and return a simplified form, based on the assumption that the arguments are already simplified. Such a simplifier can be quite powerful. Jekejeke Prolog runs such a stage after expansion.
The Jekejeke Prolog simplifier and its integration into the expansion framework are open source, here and here. It is for example used to reorder conjunctions; here are the example rules for this purpose:
/* Predefined Simplifications */
sys_goal_simplification(((A, B), C), J) :-
    sys_simplify_goal((B, C), H),
    sys_simplify_goal((A, H), J).
Example:
(((a, b), c), d) --> (a, (b, (c, d)))
The Jekejeke Prolog simplifier is extremely efficient, since it can work under the assumption that it receives already-normalized terms. It will not unnecessarily and repeatedly do pattern matching over the whole given term.
But writing simplification rules takes some common rewriting-system practice: a simplification rule should call simplification whenever it constructs a new term.
In the example above these are the two sys_simplify_goal/2 calls; we do not, for example, simply return a new term with (B,C) in it, as an expansion rule would do. Since (B,C) was not part of the normalized arguments of sys_goal_simplification/2, we have to normalize it first.
But since the simplifier framework is intertwined with the expansion framework, I doubt that it can be called a workflow architecture. There is no specific flow direction; the result is rather a ping-pong. Nevertheless, the simplification framework can be used in a modular way.
The Jekejeke Prolog simplifier is also used in the forward-chaining clause rewriting, where it generates multiple delta-computation clauses from one forward clause.
Bye

Making "deterministic success" of Prolog goals explicit

The matter of deterministic success of some Prolog goal has turned up time and again in—at least—the following questions:
Reification of term equality/inequality
Intersection and union of 2 lists
Remove duplicates in list (Prolog)
Prolog: How can I implement the sum of squares of two largest numbers out of three?
Ordering lists with constraint logic programming
Different methods were used (e.g., provoking certain resource errors, or looking closely at the exact answers given by the Prolog toplevel), but they all appear somewhat ad hoc to me.
I'm looking for a generic, portable, and ISO-conformant way to find out if the execution of some Prolog goal (which succeeded) left some choice-point(s) behind. Some meta predicate, maybe?
Could you please hint me in the right direction? Thank you in advance!
Good news everyone: setup_call_cleanup/3 (currently a draft proposal for ISO) lets you do that in a quite portable and beautiful way.
See the example:
setup_call_cleanup(true, (X=1;X=2), Det=yes)
succeeds with Det == yes when there are no more choice points left.
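This can be wrapped into a small reusable predicate (a sketch; per the discussion elsewhere on this page, whether the cleanup already runs on a deterministic first solution is implementation dependent):
% call_det(Goal, Det): runs Goal; Det == yes exactly when the system
% detected that no choice points of Goal remain.
call_det(Goal, Det) :-
    setup_call_cleanup(true, Goal, Det = yes).

% Example (SWI-Prolog):
% ?- call_det(member(X, [1]), Det).
% X = 1, Det = yes.
% ?- call_det(member(X, [1,2]), Det).
% X = 1.        % Det unbound: a choice point remains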
EDIT: Let me illustrate the awesomeness of this construct, or rather of the very closely related predicate call_cleanup/2, with a simple example:
In the excellent CLP(B) documentation of SICStus Prolog, we find in the description of labeling/1 a very strong guarantee:
Enumerates all solutions by backtracking, but creates choicepoints only if necessary.
This is really a strong guarantee, and at first it may be hard to believe that it always holds. Luckily for us, it is extremely easy to formulate and generate systematic test cases in Prolog to verify such properties, in essence using the Prolog system to test itself.
We start with systematically describing what a Boolean expression looks like in CLP(B):
:- use_module(library(clpb)).
:- use_module(library(lists)).
sat(_) --> [].
sat(a) --> [].
sat(~_) --> [].
sat(X+Y) --> [_], sat(X), sat(Y).
sat(X#Y) --> [_], sat(X), sat(Y).
There are in fact many more cases, but let us restrict ourselves to the above subset of CLP(B) expressions for now.
Why am I using a DCG for this? Because it lets me conveniently describe (a subset of) all Boolean expressions of specific depth, and thus fairly enumerate them all. For example:
?- length(Ls, _), phrase(sat(Sat), Ls).
Ls = [] ;
Ls = [],
Sat = a ;
Ls = [],
Sat = ~_G475 ;
Ls = [_G475],
Sat = _G478+_G479 .
Thus, I am using the DCG only to denote how many available "tokens" have already been consumed when generating expressions, limiting the total depth of the resulting expressions.
Next, we need a small auxiliary predicate labeling_nondet/1, which acts exactly as labeling/1, but is only true if a choice-point still remains. This is where call_cleanup/2 comes in:
labeling_nondet(Vs) :-
    dif(Det, true),
    call_cleanup(labeling(Vs), Det=true).
Our test case (and by this, we actually mean an infinite sequence of small test cases, which we can very conveniently describe with Prolog) now aims to verify the above property, i.e.:
If there is a choice-point, then there is a further solution.
In other words:
The set of solutions of labeling_nondet/1 is a proper subset of that of labeling/1.
Let us thus describe what a counterexample of the above property looks like:
counterexample(Sat) :-
    length(Ls, _),
    phrase(sat(Sat), Ls),
    term_variables(Sat, Vs),
    sat(Sat),
    setof(Vs, labeling_nondet(Vs), Sols),
    setof(Vs, labeling(Vs), Sols).
And now we use this executable specification in order to find such a counterexample. If the solver works as documented, then we will never find a counterexample. But in this case, we immediately get:
| ?- counterexample(Sat).
Sat = a+ ~_A,
sat(_A=:=_B*a) ? ;
So in fact the property does not hold. Broken down to the essence, although no more solutions remain in the following query, Det is not unified with true:
| ?- sat(a + ~X), call_cleanup(labeling([X]), Det=true).
X = 0 ? ;
no
In SWI-Prolog, the superfluous choice-point is obvious:
?- sat(a + ~X), labeling([X]).
X = 0 ;
false.
I am not giving this example to criticize the behaviour of either SICStus Prolog or SWI: Nobody really cares whether or not a superfluous choice-point is left in labeling/1, least of all in an artificial example that involves universally quantified variables (which is atypical for tasks in which one uses labeling/1).
I am giving this example to show how nicely and conveniently guarantees that are documented and intended can be tested with such powerful inspection predicates...
... assuming that implementors are interested to standardize their efforts, so that these predicates actually work the same way across different implementations! The attentive reader will have noticed that the search for counterexamples produces quite different results when used in SWI-Prolog.
In an unexpected turn of events, the above test case has found a discrepancy in the call_cleanup/2 implementations of SWI-Prolog and SICStus. In SWI-Prolog (7.3.11):
?- dif(Det, true), call_cleanup(true, Det=true).
dif(Det, true).
?- call_cleanup(true, Det=true), dif(Det, true).
false.
whereas both queries fail in SICStus Prolog (4.3.2).
This is the quite typical case: Once you are interested in testing a specific property, you find many obstacles that are in the way of testing the actual property.
In the ISO draft proposal, we see:
Failure of [the cleanup goal] is ignored.
In the SICStus documentation of call_cleanup/2, we see:
Cleanup succeeds determinately after performing some side-effect; otherwise, unexpected behavior may result.
And in the SWI variant, we see:
Success or failure of Cleanup is ignored
Thus, for portability, we should actually write labeling_nondet/1 as:
labeling_nondet(Vs) :-
    call_cleanup(labeling(Vs), Det=true),
    dif(Det, true).
There is no guarantee in setup_call_cleanup/3 that it detects determinism, i.e., missing choice points upon the success of a goal. The draft proposal's section 7.8.11.1 Description only says:
c) The cleanup handler is called exactly once; no later than upon failure of G. Earlier moments are:
If G is true or false, C is called at an implementation dependent moment after the last solution and after the last observable effect of G.
So there is currently no requirement that:
setup_call_cleanup(true, true, Det=true)
returns Det=true in the first place. This is also reflected in the test cases of section 7.8.11.4 Examples in the draft proposal, where we find one test case which says:
setup_call_cleanup(true, true, X = 2).
Either: Succeeds, unifying X = 2.
Or: Succeeds.
So it is a valid implementation both to detect determinism and not to detect it.

How to avoid using assert and retractall in Prolog to implement global (or state) variables

I often end up writing Prolog code involving some arithmetic calculation (or state information important throughout the program) by first obtaining the value stored in a predicate, then recalculating the value, and finally storing the value using retractall and assert, because in Prolog we cannot assign a value to a variable twice using is (thus making almost every variable that needs modification global). I have come to know that this is not a good practice in Prolog. In this regard I would like to ask:
Why is it a bad practice in Prolog (though I myself don't like to go through the above-mentioned steps just to have a kind of flexible (modifiable) variable)?
What are some general ways to avoid this practice? Small examples will be greatly appreciated.
P.S. I just started learning Prolog. I do have programming experience in languages like C.
Edited for further clarification
A bad example (in win-prolog) of what I want to say is given below:
:- dynamic(value/1).
:- assert(value(0)).
adds :-
    value(X),
    NewX is X + 4,
    retractall(value(_)),
    assert(value(NewX)).

mults :-
    value(Y),
    NewY is Y * 2,
    retractall(value(_)),
    assert(value(NewY)).

start :-
    retractall(value(_)),
    assert(value(3)),
    adds,
    mults,
    value(Q),
    write(Q).
Then we can query like:
?- start.
Here it is very trivial, but in real programs and applications the above-shown method of global variables becomes unavoidable. Sometimes the list of directives shown above, like assert(value(0))..., grows very long, with many more assert goals defining more variables. This is done to make communication of values between different predicates possible, and to store the states of variables during the runtime of the program.
Finally, I'd like to know one more thing:
When does the practice mentioned above become unavoidable in spite of various solutions suggested by you to avoid it?
The general way to avoid this is to think in terms of relations between states of your computations: You use one argument to hold the state that is relevant to your program before a calculation, and a second argument that describes the state after some calculation. For example, to describe a sequence of arithmetic operations on a value V0, you can use:
state0_state(V0, V) :-
    operation1_result(V0, V1),
    operation2_result(V1, V2),
    operation3_result(V2, V).
Notice how the state (in your case: the arithmetic value) is threaded through the predicates. The naming convention V0 -> V1 -> ... -> V scales easily to any number of operations and helps to keep in mind that V0 is the initial value, and V is the value after the various operations have been applied. Each predicate that needs to access or modify the state will have an argument that allows you to pass it the state.
A huge advantage of threading the state through like this is that you can easily reason about each operation in isolation: You can test it, debug it, analyze it with other tools etc., without having to set up any implicit global state. As another huge benefit, you can then use your programs in more directions provided you are using sufficiently general predicates. For example, you can ask: Which initial values lead to a given outcome?
?- state0_state(V0, given_outcome).
This is of course not readily possible when using the imperative style. You should therefore use constraints instead of is/2, because is/2 only works in one direction. Constraints are much easier to use and are a more general, modern alternative to low-level arithmetic.
The dynamic database is also slower than threading states through in variables, because it performs indexing etc. on each assertz/1.
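As a concrete illustration (my rewrite, using SWI-Prolog's library(clpfd)), the adds/mults program from the question becomes a pure relation between states:
:- use_module(library(clpfd)).

% State threading instead of assert/retract: V0 is the value before
% an operation, V the value after it.
adds(V0, V)  :- V #= V0 + 4.
mults(V0, V) :- V #= V0 * 2.

start(Q) :-
    V0 = 3,
    adds(V0, V1),
    mults(V1, Q).

% ?- start(Q).
% Q = 14.
% ?- adds(V0, 7).      % and it also works in the other direction
% V0 = 3.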
1 - It's bad practice because it destroys the declarative model that (pure) Prolog programs exhibit.
The programmer must then think in procedural terms, and the procedural model of Prolog is rather complicated and difficult to follow.
Specifically, we must be able to decide about the validity of asserted knowledge while the program backtracks, i.e., follows alternative paths to those already tried that (maybe) caused the assertions.
2 - We need additional variables to keep the state. A practical, though maybe not very intuitive, way is to use grammar rules (a DCG) instead of plain predicates. Grammar rules are translated with two extra list arguments, normally hidden, and we can use those arguments to pass the state around implicitly, referencing or changing it only where needed.
A really interesting introduction is DCGs in Prolog by Markus Triska. Look for Implicitly passing states around: you'll find this enlightening small example:
num_leaves(nil), [N1] --> [N0], { N1 is N0 + 1 }.
num_leaves(node(_,Left,Right)) -->
    num_leaves(Left),
    num_leaves(Right).
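To run it (my example), thread the counter as the DCG state, starting from 0:
?- phrase(num_leaves(node(a, nil, node(b, nil, nil))), [0], [N]).
N = 3.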
More generally, and for further practical examples, see Thinking in States, from the same author.
Edit: generally, assert/retract are required only if you need to change the database, or to keep track of computation results across backtracking. A simple example from my (very) old Prolog interpreter:
findall_p(X, G, _) :-
    asserta(found('$mark')),
    call(G),
    asserta(found(X)),
    fail.
findall_p(_, _, N) :-
    collect_found([], N),
    !.

collect_found(S, L) :-
    getnext(X),
    !,
    collect_found([X|S], L).
collect_found(L, L).

getnext(X) :-
    retract(found(X)),
    !,
    X \= '$mark'.
findall/3 can be seen as the basic all-solutions predicate. That code should be much the same as in the Clocksin-Mellish textbook, Programming in Prolog. I used it while testing the 'real' findall/3 I implemented. You can see that it's not reentrant, because the '$mark' sentinel would be shared between nested calls.
