How is predicate logic represented in Prolog?

This may be a strange and broad question, and not a 100% programming question, but I hope that's OK. I recently had a discussion about the fact that many Prolog programs don't follow strict predicate logic (in Frege's sense) but are often "object oriented", which I am trying to grasp.
I know that Prolog is based on first-order predicate logic, especially Horn clauses, and that they are a special form of modus ponens. A fact or a rule that occurs solo is simply a clause, but as soon as I add more than one occurrence they become a predicate.
How are the quantifiers of first-order predicate logic represented and related to fact, rule, predicate, or the Prolog concept in general? What does the functor express, and what do the arguments express, in relation to predicate logic? How is (first-order) predicate logic reflected in Prolog, and where does Prolog depart from its concepts? E.g., how would I define a point, a line, and a vertical line in (first-order) predicate logic?
How do I formulate this in (first-order) predicate logic, and what is the semantic and logical difference between
vertical(line).
line(vertical).
Or take a line and a point, as in this example. Are point and line not predicate logic?
For me, point(X) is "the set of all points", and when I pick a concrete point, "there exists a point(110, 12)".
point(X,Y).
line(point(W,X), point(Y,Z)).
vertical(line(point(X,Y), point(X,Z))).
horizontal(line(point(X,Y), point(Z,Y))).
Any info helps! Many thanks, H

A chapter of Programming in Prolog by W. Clocksin and C. Mellish is devoted to explaining the relation of Prolog to logic. Quoting from there:
If we wish to discuss how Prolog is related to logic, we must first establish what we mean by logic. Logic was originally devised as a way of representing the form of arguments, so that it would be possible to check in a formal way whether or not they are valid. Thus we can use logic to express propositions, the relations between propositions and how one can validly infer some propositions from others. The particular form of logic that we will be talking about here is called the Predicate Calculus. We will only be able to say a few words about it here. There are scores of good basic introductions to logic you can turn to for background reading.
If we wish to express propositions about the world, we must be able to describe the objects that are involved in them. In Predicate Calculus, we represent objects by terms. A term is of one of the following forms:
A constant symbol. This is a symbol that stands for a single individual or concept. We can think of this as a Prolog atom, and we will use the Prolog syntax. So greek, agatha, and peace are constant symbols.
A variable symbol. This is a symbol that we may want to stand for different individuals at different times. Variables are really only introduced in conjunction with quantifiers, which are discussed below. We can think of them as Prolog variables and will use the Prolog syntax. Thus X, Man, and Greek are variable symbols.
A compound term. A compound term consists of a function symbol, together with an ordered set of terms as its arguments. The idea is that the compound term represents some individual that depends on the individuals represented by the arguments. The function symbol represents how the first depends on the second. For instance, we could have a function symbol standing for the notion of "distance" and two arguments. In this case, the compound term stands for the distance between the objects represented by the arguments. We can think of a compound term as a Prolog structure with the function symbol as the functor. We will write Predicate Calculus compound terms using the Prolog syntax, so that, for instance, wife(henry) might mean Henry's wife, distance(point1, X) might mean the distance between some particular point and some other place to be specified, and classes(mary, dayafter(W)) might mean the classes that Mary teaches on the day after some day W to be specified.
Thus in Predicate Calculus the ways of representing objects are just like the ways available in Prolog.
It doesn't seem appropriate to put the entire chapter here... There is also a very instructive program in Appendix B that performs an automatic translation of WFFs into clauses.
The book is very readable; it's just a pity it's not among the titles in the Free Prolog Programming Books section.

I know that Prolog is based on first-order predicate logic, especially Horn clauses, and that they are a special form of modus ponens.
In a sense, inverse "modus ponens":
a :- b
You want to show "a true", and to do so, you have to show "b true".
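A minimal concrete sketch (the predicates man/1 and mortal/1 are invented for illustration):
man(socrates).
mortal(X) :- man(X).   % to show "mortal(X) true", show "man(X) true"

?- mortal(socrates).
true.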
A fact or a rule that occurs solo is simply a clause, but as soon as I add more than one occurrence they become a predicate.
No, they are all predicates. A "predicate" is an object/agent/program/platonic phenomenon which expresses that there (objectively) is some "relationship" between "things", and you can ask the Prolog processor about that relationship. There is no direct meaning associated with any of that, though; it's "strings related to strings via strings". We are working with syntactic machines after all (i.e., computers).
Enter this logic program:
p(x,y). % Predicate p/2 states that there is a relationship p between x and y
And now, you can query the database about what the program is saying:
?- p(x,y).
true. % a p relationship exists (fact, but could also be rule)
?- p(x,A).
A = y. % the thing related to x via p is y
?- p(A,y).
A = x. % the thing related to y via p is x
?- p(A,B).
A = x, % things related via p are x and y
B = y.
?- p(c,d).
false. % not REALLY "false" but "as far as I can tell, there
% is no relationship p between c and d"
Note the interpretation of "false", which is not the "strong false" of classical logic. Even though it is traditionally stated that Prolog works in classical logic, this is not really the case:
From "Logic Programming with Strong Negation" (David Pearce, Gerd Wagner, FU Berlin, 1991), appears in Springer LNAI 475: Extensions of Logic Programming, International Workshop Tübingen, FRG, December 8–10, 1989 Proceedings):
According to the standard view, a logic program is a set of definite Horn clauses. Thus, logic programs are regarded as syntactically restricted first-order theories within the framework of classical logic. Correspondingly, the proof theory of logic programs is considered as the specialized version of classical resolution, known as SLD-resolution. This view, however, neglects the fact that a program clause, a_0 <- a_1, a_2, ..., a_n, is an expression of a fragment of positive logic (a subsystem of intuitionistic logic) rather than an implicational formula of classical logic. The classical interpretation of logic programs, therefore, seems to be a semantical overkill.
It should be clear that in order to explain the deduction mechanism of Prolog one does not have to refer to the indirect method of SLD-resolution which checks for the refutability of the contrary. It is certainly more natural to view Prolog's proof procedure as a kind of natural deduction, as, for example, in [Hallnäs & Schroeder-Heister 1987] and [Miller 1989]. This also is more in line with the intuitions of a Prolog programmer. Since Prolog is the paradigm, logic programming semantics should take it as a point of departure.
Now:
How are the quantifiers of first-order predicate logic represented and related
to fact, rule, predicate, or the Prolog concept in general?
That is a long story. Note that Prolog is primarily about "programming using logic" and also about "modeling using logic". The two aspects certainly overlap well for problems that can be solved by explicit enumeration, but Prolog is not made for specifying arbitrary FOL constraints describing a sought-for solution. In fact, certain FOL constructs cannot be represented at all, and others have to be transformed into nominally equivalent expressions that are agreeable to the machine. Look up "skolemization". For example: https://www.cs.toronto.edu/~sheila/384/w11/Lectures/csc384w11-KR-tutorial.pdf
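As a quick illustration of the standard reading (the predicates parent/2 and grandparent/2 are invented for illustration): variables in a clause head are universally quantified over the whole clause, while variables occurring only in the body act as existentially quantified with respect to the head:
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
% "for all X and Z: grandparent(X, Z) holds if there exists a Y
%  such that parent(X, Y) and parent(Y, Z) hold."
An existential that cannot be read off this way, as in "for all X there exists a Y such that loves(X, Y)", has no direct clause form; skolemization replaces Y with a term f(X) over a fresh function symbol f, giving the fact loves(X, f(X)).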
On the flip side, Prolog provides "meta-predicates" which generate solutions by calling other predicates, so it's making forays into second-order logic. As it must - nobody can survive in the FOL desert for long.
What does the functor express
Nothing. It just stands for itself. Pure syntax. Look up "Herbrand Universe".
How do I formulate this in (first-order) predicate logic,
and what is the semantic and logical difference between
vertical(line).
line(vertical).
It's you who imbues vertical and line with meaning. So, feelings. You want a "vertical line", so you would say the "thing" is the "line" and "vertical" is an attribute of the "line". So vertical(line) sounds appropriate. Or maybe attribute(line, vertical). It depends.
Here:
point(X,Y).
line(point(W,X), point(Y,Z)).
You have two aspects here:
Predicates express "relationships". "Function symbols" are used to construct "things with structure": you can form trees of stuff, with function symbols on the nodes and integers/strings/variables on the leaves. These are called "terms". But terms can appear as predicates or as things, depending on the context; it's quite fluid. So you can, for example, construct a Prolog program with another Prolog program.
point(X,Y)
line(point(W,X), point(Y,Z))
These are terms!
If you type this into a file program.pl:
point_on_line(point(X,Y),line(point(W,X), point(Y,Z))).
The terms appear as "things" related by predicate point_on_line/2. The whole line is itself a term.
If you type this into a file program.pl:
point(X,Y).
line(point(W,X), point(Y,Z)).
The terms appear as "predicates", and point appears both as predicate point/2 and as "thing" about which predicate line/2 is talking.
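To make the distinction concrete, here is a minimal runnable sketch of the asker's idea (names taken from the question): a line is vertical exactly when its two endpoints share their X coordinate, and horizontal when they share their Y coordinate:
vertical(line(point(X,_), point(X,_))).
horizontal(line(point(_,Y), point(_,Y))).

?- vertical(line(point(1,2), point(1,5))).
true.
?- vertical(line(point(1,2), point(3,5))).
false.
Here line/2 and point/2 are function symbols building "things", while vertical/1 and horizontal/1 are predicates stating relationships about those things.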
This is actually a vast subject, and it takes some time to get used to it, much more so than functional programming. I had some Prolog and logic courses at uni, but 20 years later I found out that I had badly misunderstood a lot of aspects.

Related

functor vs predicate - definition for students

The question of the difference between a functor and a predicate in Prolog is asked often.
I am trying to develop an informal definition that is suitable for new students.
A functor is the name of a predicate. The word functor is used when
discussing syntax, such as arity, affix type, and relative priority
over other functors. The word predicate is used when discussing
logical and procedural meaning.
This looks "good enough" to me.
Question: Is it good enough, or is it fundamentally flawed?
To be clear, I am aiming to develop a useful intuition, not write legalistic text for an ISO standard!
The definition in https://www.swi-prolog.org/pldoc/man?section=glossary is:
"functor: Combination of name and arity of a compound term. The term foo(a,b,c) is said to be a term belonging to the functor foo/3." This does not help a lot, and certainly doesn't explain the difference from a predicate, which is defined: "Collection of clauses with the same functor (name/arity). If a goal is proved, the system looks for a predicate with the same functor, then uses indexing to select candidate clauses and then tries these clauses one-by-one. See also backtracking.".
One of the things that often confuses students is that foo(a) could be a term, a goal, or a clause head, depending on the context.
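A tiny illustration of the three roles (foo/1 invented for illustration):
foo(a).            % clause head: this line defines the predicate foo/1
?- foo(a).         % goal: asks whether foo(a) can be proved
?- X = f(foo(a)).  % term: here foo(a) is just data inside another term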
One way to think about term versus predicate/goal is to treat call/1 as if it is implemented by an "infinite" number of clauses that look like this:
call(foo(X)) :- foo(X).
call(foo(X,Y)) :- foo(X,Y).
call(bar(X)) :- bar(X).
etc.
This is why you can pass around a term (which is just data) but treat it as a "goal". So, in Prolog there's no need to have a special "closure" or "thunk" or "predicate" data type: everything can be treated as just data and can be executed by use of the call/1 predicate.
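For example, storing a goal in a variable and executing it later (using writeln/1, as in the maplist example below):
?- Goal = writeln(hello), call(Goal).
hello
Goal = writeln(hello).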
(There are also variations on "call", such as call/2, which can be defined as:
call(foo, X) :- foo(X).
call(foo(X), Y) :- foo(X, Y).
etc.)
This can be used to implement "meta-predicates", such as maplist/2, which takes a list and applies a predicate to each element:
?- maplist(writeln, [one,two,three]).
one
two
three
where a naïve implementation of maplist/2 is (the actual implementation is a bit more complicated, for efficiency):
maplist(_Goal, []).
maplist(Goal, [X|Xs]) :-
    call(Goal, X),
    maplist(Goal, Xs).
The answer by Peter Ludemann is already very good. I want to address the following from your question:
To be clear, I am aiming to develop a useful intuition, not write legalistic text for an ISO standard!
If you want to develop intuition, don't bother writing definitions. Definitions end up being written in legalese or are useless as definitions. This is why we sometimes explain by describing how the machine will behave; that is supposedly well defined, whereas any statement written in natural language is inherently ambiguous: it is interpreted by a human brain, and you have no idea what is in that brain when it interprets it. As a defense, you end up using legalese to write definitions in natural language.
You can give examples, which will leave some impression and probably develop intuition.
"The Prolog compound term a(b, c) can be described by the functor a/2. Here, a is the term name, and 2 is its arity".
"The functor foo/3 describes any term with a name foo and three arguments."
"Atomic terms by definition have arity 0: for example atoms or numbers. The atom a belongs to the functor a/0."
"You can define two predicates with the same name, as long as they have a different number of arguments."
There is also the possibility of confusion because some system predicates that allow introspection might take either a functor or the head of the predicate they work on. For example, abolish/1 takes a functor, while retractall/1 takes the predicate head.
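Concretely (assuming a dynamic predicate foo/1):
?- retractall(foo(_)).   % argument is a head term
?- abolish(foo/1).       % argument is a functor, i.e. Name/Arity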

Does Prolog backtracking/search always follow the same scheme?

The following prolog code establishes a very simple grammar for sentences (sentence = object + verb + subject), and provides some small vocabulary.
% Example 05 - Generating Sentences
% subjects, verbs and objects
subject(john).
subject(jane).
verb(eats).
verb(washes).
object(apples).
object(spinach).
% sentence = subject + verb + object
sentence(X,Y,Z) :- subject(X), verb(Y), object(Z).
% sentence as a list
sentence(S) :- S=[X, Y, Z], subject(X), verb(Y), object(Z).
When asked to generate valid sentences, SWI-Prolog (specifically swish.swi-prolog.org) generates them in the following order:
?- sentence(S).
S = [john, eats, apples]
S = [john, eats, spinach]
S = [john, washes, apples]
S = [john, washes, spinach]
S = [jane, eats, apples]
S = [jane, eats, spinach]
S = [jane, washes, apples]
S = [jane, washes, spinach]
Question
The above suggests that Prolog always backtracks from the right to the left of conjunctive queries. Is this true for all Prolog systems? Is it part of the specification? If not, is it common enough to be relied upon?
Notes
For clarity, by backtracking from the right, I mean that Z is unbound and rebound to find all possibilities, given the first matches for X and Y. Then after these have been exhausted, Y is unbound and rebound, and for each Y, different Z are tested. Finally it is X that is unbound then rebound to new values, and for each X the combinations of Y and Z are generated again.
Short answer: yes. Prolog always uses this same well-defined scheme, known as chronological backtracking, together with one specific instance of SLD-resolution.
But that needs some elaboration.
Prolog systems stick to that very strategy because it is quite efficient to implement and in many cases leads directly to the desired result. For those cases where Prolog works nicely, it is pretty much competitive with imperative programming languages for many tasks. Some systems even translate to machine code, the most prominent being the just-in-time compiler of SICStus Prolog.
As you have probably already encountered, there are, however, cases where that strategy leads to undesirable inefficiencies and even non-termination whereas another strategy would produce an answer. So what to do in such situations?
Firstly, the precise encoding of a problem may be reformulated. To take your case of grammars, we even have a specific formalism for this, called Definite Clause Grammars (DCGs). It is very compact and leads to both efficient parsing and efficient generation in many cases. This insight (and the precise encoding) was not that evident for quite some time; in fact, the precise moment of Prolog's birth, (pretty exactly) 50 years ago, was when this was understood. In your example you have just 3 tokens in a list, but most of the time that number can vary. That is where the DCG formalism shines, and it can still be used both to parse and to generate sentences. Say you also want to include subjects of unrestricted length like [the,boy], [the,nice,boy], [the,nice,and,handsome,boy], ...; a sketch follows below.
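Here is a minimal DCG sketch (the rule names are invented for illustration; handling the "and" in [the,nice,and,handsome,boy] is omitted for brevity):
sentence   --> subject, verb, object.
subject    --> [john].
subject    --> [jane].
subject    --> [the], adjectives, [boy].   % subjects of unrestricted length
adjectives --> [].
adjectives --> adjective, adjectives.
adjective  --> [nice].
adjective  --> [handsome].
verb       --> [eats].
verb       --> [washes].
object     --> [apples].
object     --> [spinach].

?- phrase(sentence, [the,nice,boy,eats,apples]).
true.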
There are many such encoding techniques to learn.
Another way in which Prolog's strategy can be improved is to offer more flexible selection strategies with built-ins like freeze/2, when/2 and similar coroutining methods. While such extensions have existed for quite some time, they are difficult to employ, particularly because understanding non-termination becomes even more complex.
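For example, in systems that provide freeze/2 (e.g. SWI-Prolog, SICStus), a goal can be delayed until its variable is bound:
?- freeze(X, X > 0), X = 2.
X = 2.
?- freeze(X, X > 0), X = -1.
false.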
A more successful extension is constraints (constraint programming), most prominently clpz/clpfd, which are used primarily for combinatorial problems. While chronological backtracking is still in place, it is only used as a last resort, either to ensure correctness of solutions with labeling/2 or when there is no better way to express the actual problem.
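A small taste, after :- use_module(library(clpfd)). in SWI-Prolog (the library is called clpz in Scryer Prolog): unlike >/2, the constraint #>/2 does not care whether its arguments are instantiated yet:
?- X #> 0, X = -1.
false.
?- X #> 0, X = 2.
X = 2.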
And finally, you may want to reconsider Prolog's strategy in a more fundamental way. This is all possible by means of meta-interpretation. In some sense this is a completely new implementation, but it can often reuse a lot of Prolog's infrastructure, thereby making such meta-interpreters quite compact compared to other programming languages. And it may not only be used to implement other strategies; it is even used to prototype and implement other programming languages. The most prominent example is Erlang, which first existed as a Prolog meta-interpreter, its syntax still being quite Prologish.
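As a starting point, here is the standard "vanilla" meta-interpreter, a sketch that handles only conjunctions and user-defined clauses (built-ins and other control constructs need extra clauses, and some systems require the interpreted predicates to be declared dynamic for clause/2 to access them):
mi(true).
mi((A, B)) :- mi(A), mi(B).
mi(Head)   :- clause(Head, Body), mi(Body).
With the sentence/3 program from the question loaded, ?- mi(sentence(X, Y, Z)). enumerates the same sentences in the same order; alternative strategies are obtained by rewriting these three clauses.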
Prolog as a programming language also contains many features that do not fit into this pure view, like side-effecting built-ins such as put_char/1, which are clearly a hindrance in meta-interpretation. But in many such situations this can be mitigated by restricting their use to specific modes and producing instantiation errors otherwise. Think of (non-constraint-based) arithmetic, which produces an error if the result cannot be determined immediately, but still produces correct results when used with sufficiently instantiated arguments, as in
?- X > 0, X = -1.
error(instantiation_error,(>)/2).
?- X = -1, X > 0.
false.
?- X = 2, X > 0.
X = 2.
Finally, a word on non-termination. Often non-termination is seen as a fundamental weakness of Prolog. But there is another view on this: other, much older systems or engines also suffer runaways from time to time, and they are still used. In the case of programming languages, runaways are a fundamental consequence of their generality. And a non-terminating query is still preferable to an incorrect but terminating query.

How to identify wasteful representations of Prolog terms

What is the Prolog predicate that helps to show wasteful representations of Prolog terms?
Supplement
In an aside in an earlier Prolog SO answer, IIRC by mat, a Prolog predicate was used to analyze a Prolog term and show how it was overly complicated.
Specifically for a term like
[op(add),[[number(0)],[op(add),[[number(1)],[number(1)]]]]]
it revealed that this has too many [].
I have searched my Prolog questions and looked at the answers twice and still can't find it. I also recall that it was not in SWI-Prolog but in another Prolog, so instead of installing the other Prolog, I was able to use the predicate with an online version of Prolog.
If you read along in the comments you will see that mat identified the post I was seeking.
What I was seeking
I have one final note on the choice of representation. Please try out the following, using for example GNU Prolog or any other conforming Prolog system:
| ?- write_canonical([op(add),[Left,Right]]).
'.'(op(add),'.'('.'(_18,'.'(_19,[])),[]))
This shows that this is a rather wasteful representation, and at the same time prevents uniform treatment of all expressions you generate, combining several disadvantages.
You can make this more compact for example using Left+Right, or make all terms uniformly available using for example op_arguments(add, [Left,Right]), op_arguments(number, [1]) etc.
Evolution of a Prolog data structure
If you don't know it already: the question is related to writing a term rewriting system in Prolog that does symbolic math, and I am mostly concentrating on simplification rewrites at present.
Most people only see math expressions in a natural representation
x + 0 + sin(y)
and computer programmers realize that most programming languages have to parse the math expression and convert it into an AST before using it:
add(add(X,0),sin(Y))
but most programming languages cannot work with the AST as written above and have to create data structures. See: Compiler/lexical analyzer, Compiler/syntax analyzer, Compiler/AST interpreter.
Now, if you have ever done more than dip your toe in the water when learning Prolog, you will have come across Program 3.30, "Derivative rules", which is included in this, though the person did not give attribution.
If you try to roll your own code to do symbolic math with Prolog, you might try using is/2, quickly find that it doesn't work, and then find that Prolog can read the following as a compound term:
add(add(X,0),sin(Y))
This starts to work, until you need to access the name of the functor; you find functor/3, but then we are getting back to having to parse the input. However, as noted by mat and in "The Art of Prolog", if one makes the name of the structure accessible,
op(add,(op(add,X,0),op(sin,Y)))
one can now access not only the terms of the expression but also the operator in a Prolog-friendly way.
If it were not for the aside mat made, the code would still be using the nested list data structure; it is now being converted to use compound terms that expose the name of the structure. I wonder if there is a common phrase to describe that; if not, there should be one.
Anyway, the new, simpler data structure worked on the first set of tests; now to see if it holds up as the project is developed further.
Try it for yourself online
Using GNU Prolog at tutorialspoint.com, enter
:- initialization(main).
main :- write_canonical([op(add),[Left,Right]]).
then click Execute and look at the output
sh-4.3$ gprolog --consult file main.pg
GNU Prolog 1.4.4 (64 bits)
Compiled Aug 16 2014, 23:07:54 with gcc
By Daniel Diaz
Copyright (C) 1999-2013 Daniel Diaz
compiling /home/cg/root/main.pg for byte code...
/home/cg/root/main.pg:2: warning: singleton variables [Left,Right] for main/0
/home/cg/root/main.pg compiled, 2 lines read - 524 bytes written, 9 ms
'.'(op(add),'.'('.'(_39,'.'(_41,[])),[]))| ?-  
Clean vs. defaulty representations
From The Power of Prolog by Markus Triska
When representing data with Prolog terms, ask yourself the following question:
Can I distinguish the kind of each component from its outermost functor and arity?
If this holds, your representation is called clean. If you cannot distinguish the elements by their outermost functor and arity, your representation is called defaulty, a wordplay combining "default" and "faulty". This is because reasoning about your data will need a "default case", which is applied if everything else fails. In addition, such a representation prevents argument indexing, and is considered faulty due to this shortcoming. Always aim to avoid defaulty representations! Aim for cleaner representations instead.
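A small sketch of a clean representation (predicate and wrapper names invented for illustration): every kind of expression node is identifiable from its outermost functor, so no default case is needed:
eval(n(N), N).                 % numbers are wrapped in n/1
eval(add(A, B), V) :-          % additions are add/2
    eval(A, VA),
    eval(B, VB),
    V is VA + VB.

?- eval(add(n(1), add(n(0), n(2))), V).
V = 3.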
Please see the last part of:
https://stackoverflow.com/a/42722823/1613573
It uses write_canonical/1 to display the canonical representation of a term.
This predicate is very useful when learning Prolog and helps to clear several misconceptions that are typical for beginners. See for example the recent question about hyphens, where it would have helped too.
Note that in SWI, the output deviates from canonical Prolog syntax in general, so I am not using SWI when explaining Prolog syntax.
You could also programmatically count how many subterms are single-element lists, using something like this (not optimized):
% Index is the number of subterms of Term that are single-element
% lists, i.e. terms of the form '.'(X, []).
single_element_list_subterms(Term, Index) :-
    Term =.. [Functor|Args],
    (   Args = []
    ->  Index = 0
    ;   maplist(single_element_list_subterms, Args, Indices),
        sum_list(Indices, SubIndex),
        (   Functor = '.', Args = [_, []]
        ->  Index is SubIndex + 1
        ;   Index = SubIndex
        )
    ).
Trying it on the example compound term:
| ?- single_element_list_subterms([op(add),[[number(0)],[op(add),[[number(1)],[number(1)]]]]], Count).
Count = 7
yes
| ?-
Indicating that there are 7 subterms consisting of a single-element list. Here is the result of write_canonical:
| ?- write_canonical([op(add),[[number(0)],[op(add),[[number(1)],[number(1)]]]]]).
'.'(op(add),'.'('.'('.'(number(0),[]),'.'('.'(op(add),'.'('.'('.'(number(1),[]),'.'('.'(number(1),[]),[])),[])),[])),[]))
yes
| ?-

Prolog: Can you make a predicate behave differently depending on whether a value is ground or not?

I have a somewhat complex predicate with four arguments that needs to work when the first and last arguments are ground/not ground, not ground/ground, or ground/ground, while the second and third arguments are ground.
i.e. predicate(A,B,C,D).
I can't provide my actual code since it is part of an assignment.
I have it mostly working, but I am receiving instantiation errors when A is not ground but D is. I have singled out a line of code that is causing issues: when I change the goal order of the predicate, it works when D is ground and A is not, but in doing so it no longer works when A is ground and D is not. I'm not sure there is a way around this.
Is there a way to use both lines of code, so that if A is ground, for instance, it will use the first line, but if A is not ground, it will use the second and ignore the first? And vice versa.
You can do that, but, almost invariably, you will break the declarative semantics of your programs if you do that.
Consider a simple example to see how such a non-monotonic and extra-logical predicate already breaks basic assumptions and typical declarative properties of well-known predicates, like commutativity of conjunction:
?- ground(X), X = a.
false.
But, if we simply exchange the goals by commutativity of conjunction, we get a different answer:
?- X = a, ground(X).
X = a.
For this reason, such meta-logical predicates are best avoided, especially if you are just beginning to learn the language.
Instead, better stay in the pure and monotonic subset of Prolog. Use constraints like dif/2 and CLP(FD) to make your programs usable in all directions, increasing generality and ease of understanding.
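For example, dif/2 (available in e.g. SWI-Prolog, SICStus and Scryer Prolog) states disequality declaratively and gives the same answers regardless of goal order:
?- dif(X, a), X = b.
X = b.
?- X = b, dif(X, a).
X = b.
?- dif(X, a), X = a.
false.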
See logical-purity, prolog-dif and clpfd for more information.

How to avoid using assert and retractall in Prolog to implement global (or state) variables

I often end up writing Prolog code which involves some arithmetic calculation (or state information important throughout the program) by first obtaining the value stored in a predicate, then recalculating the value, and finally storing the value using retractall and assert, because in Prolog we cannot assign a value to a variable twice using is (thus making almost every variable that needs modification global). I have come to know that this is not a good practice in Prolog. In this regard I would like to ask:
Why is it a bad practice in Prolog (though I myself don't like having to go through the above-mentioned steps just to have a kind of flexible (modifiable) variable)?
What are some general ways to avoid this practice? Small examples will be greatly appreciated.
P.S. I just started learning Prolog. I do have programming experience in languages like C.
Edited for further clarification
A bad example (in win-prolog) of what I want to say is given below:
:- dynamic(value/1).
:- assert(value(0)).
adds :-
    value(X),
    NewX is X + 4,
    retractall(value(_)),
    assert(value(NewX)).

mults :-
    value(Y),
    NewY is Y * 2,
    retractall(value(_)),
    assert(value(NewY)).

start :-
    retractall(value(_)),
    assert(value(3)),
    adds,
    mults,
    value(Q),
    write(Q).
Then we can query like:
?- start.
Here it is very trivial, but in real programs and applications the above method of global variables can become unavoidable. Sometimes the list of directives like assert(value(0))... grows very long, with many more assert predicates defining more variables. This is done to make it possible to communicate values between different predicates and to store the states of variables during the program's runtime.
Finally, I'd like to know one more thing:
When does the practice mentioned above become unavoidable in spite of various solutions suggested by you to avoid it?
The general way to avoid this is to think in terms of relations between states of your computations: You use one argument to hold the state that is relevant to your program before a calculation, and a second argument that describes the state after some calculation. For example, to describe a sequence of arithmetic operations on a value V0, you can use:
state0_state(V0, V) :-
    operation1_result(V0, V1),
    operation2_result(V1, V2),
    operation3_result(V2, V).
Notice how the state (in your case: the arithmetic value) is threaded through the predicates. The naming convention V0 -> V1 -> ... -> V scales easily to any number of operations and helps to keep in mind that V0 is the initial value, and V is the value after the various operations have been applied. Each predicate that needs to access or modify the state will have an argument that allows you to pass it the state.
A huge advantage of threading the state through like this is that you can easily reason about each operation in isolation: You can test it, debug it, analyze it with other tools etc., without having to set up any implicit global state. As another huge benefit, you can then use your programs in more directions provided you are using sufficiently general predicates. For example, you can ask: Which initial values lead to a given outcome?
?- state0_state(V0, given_outcome).
This is of course not readily possible when using the imperative style. You should therefore use constraints instead of is/2, because is/2 only works in one direction. Constraints are much easier to use and a more general modern alternative to low-level arithmetic.
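For instance, the adds/mults example from the question, rewritten with state threading and constraints (a sketch using library(clpfd) in SWI-Prolog), runs in both directions:
:- use_module(library(clpfd)).

adds(V0, V)  :- V #= V0 + 4.
mults(V0, V) :- V #= V0 * 2.

state0_state(V0, V) :-
    adds(V0, V1),
    mults(V1, V).

?- state0_state(3, Q).
Q = 14.
?- state0_state(V0, 14).     % which initial value leads to 14?
V0 = 3.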
The dynamic database is also slower than threading states through in variables, because it performs indexing etc. on each assertz/1.
1 - It's bad practice because it destroys the declarative model that (pure) Prolog programs exhibit.
The programmer must then think in procedural terms, and the procedural model of Prolog is rather complicated and difficult to follow.
Specifically, we must be able to decide about the validity of asserted knowledge while the program backtracks, i.e. follows alternative paths to those already tried that (maybe) caused the assertions.
2 - We need additional variables to keep the state. A practical, though maybe not very intuitive, way is using grammar rules (a DCG) instead of plain predicates. Grammar rules are translated with two additional list arguments, normally hidden, and we can use those arguments to pass the state around implicitly, referencing/changing it only where needed.
A really interesting introduction is here: DCGs in Prolog by Markus Triska. Look for "Implicitly passing states around"; you'll find this enlightening small example:
num_leaves(nil), [N1] --> [N0], { N1 is N0 + 1 }.
num_leaves(node(_,Left,Right)) -->
    num_leaves(Left),
    num_leaves(Right).
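Usage goes through phrase/3, threading the counter from an initial 0 (the nil leaves are what gets counted):
?- phrase(num_leaves(node(a, nil, node(b, nil, nil))), [0], [N]).
N = 3.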
More generally, and for further practical examples, see Thinking in States, from the same author.
edit: generally, assert/retract are required only if you need to change the database, or to keep track of computation results across backtracking. A simple example from my (very) old Prolog interpreter:
findall_p(X, G, _) :-
    asserta(found('$mark')),
    call(G),
    asserta(found(X)),
    fail.
findall_p(_, _, N) :-
    collect_found([], N),
    !.

collect_found(S, L) :-
    getnext(X),
    !,
    collect_found([X|S], L).
collect_found(L, L).

getnext(X) :-
    retract(found(X)),
    !,
    X \= '$mark'.
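Usage mirrors the real findall/3 (assuming the standard member/2):
?- findall_p(X, member(X, [a, b, c]), L).
L = [a, b, c].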
findall/3 can be seen as the basic all-solutions predicate. That code should be much the same as in the Clocksin-Mellish textbook, Programming in Prolog. I used it while testing the 'real' findall/3 I implemented. You can see that it's not reentrant, because nested calls would alias the '$mark'.
