This is an easy question: I've seen this example in a Prolog text book.
It is implementing if-then-else using a cut.
if_then_else(P, Q, R) :- P, !, Q.
if_then_else(P, Q, R) :- R.
Can anyone explain what this program is doing, and why it is useful?
The most important thing to note about this program is that it is definitely not a nice relation.
For example, from a pure logic program, we expect to be able to derive whether the condition held if we pass it the outcome. This is of course in contrast to procedural programming, where you first check a condition and everything else depends on it.
Other properties are violated as well. For example, what happens if the condition itself backtracks? Suppose I want to see conclusions for each solution of the condition, not just the first one. Your code cuts away these additional solutions.
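For instance (using member/2 as the condition is my own illustration, not taken from the question):

?- if_then_else(member(X, [1,2,3]), write(X), true).
1
X = 1.

Only the first solution of the condition survives; the solutions X = 2 and X = 3 are cut away.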
I would also like to use the relation in other cases, for example, suppose I want to detect superfluous if-then-else constructs in my code. These are solutions to queries similar to:
?- if_then_else(NoMatter, Same, Same).
If if_then_else/3 were a pure relation, we could use it to answer such queries. As it is currently implemented, it yields incorrect results for such queries.
See logical-purity and if_/3 for more information.
I want to solve the following problem using the inference-making power of Prolog.
One day, three persons, a, b and c, were caught by the police at the crime scene. When the police had finished interrogating them:
i) a says I am innocent
ii) b says a is criminal
iii) c says I am innocent.
It is known that:
i) Exactly one person speaks true.
ii) Exactly one criminal is there.
Who is criminal?
To model the above problem in first-order logic:
Consider c/1 to be a predicate that is true when its argument is the criminal.
We can write:
(not(c(a)),c(c)) ; (c(c),c(a)).
c(a); c(b); c(c).
(not(c(a)),not(c(b))) ; (not(c(a)),not(c(c))) ; (not(c(b)),not(c(c))).
After modelling the above statements in Prolog, I will query:
?-c(X).
it should return:
X=c.
But the error I got is:
"No permission to modify static procedure `(;)/2'"
Since PROLOG does indeed work with Horn clauses, you'll need things of the form head :- tail, reading :- as "if."
solve(Solution) :- ...
%With a Solution looking something like:
% solve(a(true,innocent),b(false,criminal),c(false,innocent)).
To use the generate and test method, which is a common and reasonable way to solve this, you'd do something like this:
solve(Solution) :-
Solution = [a(_,_),b(_,_),c(_,_)],
generate(Solution),
validate(Solution).
generate should give you a well-formed Solution, that is, one having all the variables filled in with values that make some kind of sense (that is, false, true, criminal, innocent).
validate should ensure that the Solution matches the constraints you gave.
solve only completes when one of generate's solutions makes it past validate's constraints.
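For instance, generate/1 and validate/1 could be fleshed out like this (a sketch; the encoding with a(Truth, Guilt) terms and the helpers claim/3 and exactly_one/2 are my own assumptions, not part of the original answer):

generate([]).
generate([Person|People]) :-
    Person =.. [_Name, Truth, Guilt],     % a(_,_), b(_,_) or c(_,_)
    member(Truth, [true, false]),
    member(Guilt, [criminal, innocent]),
    generate(People).

validate([a(Ta, Ga), b(Tb, Gb), c(Tc, Gc)]) :-
    exactly_one([Ta, Tb, Tc], true),      % exactly one person speaks the truth
    exactly_one([Ga, Gb, Gc], criminal),  % exactly one criminal
    claim(Ta, Ga, innocent),              % a says "I am innocent"
    claim(Tb, Ga, criminal),              % b says "a is the criminal"
    claim(Tc, Gc, innocent).              % c says "I am innocent"

% claim(Truthful, ActualGuilt, ClaimedGuilt): the claim holds iff the speaker is truthful.
claim(true, Guilt, Guilt).
claim(false, Guilt, Claimed) :- Guilt \== Claimed.

% exactly_one(List, X): X occurs exactly once in List.
exactly_one(List, X) :-
    include(==(X), List, [_]).

With the solve/1 clause shown above, ?- solve(S). then yields S = [a(true, innocent), b(false, innocent), c(false, criminal)], i.e. c is the criminal, matching the expected X = c.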
For an introduction to the generate and test method, see this tutorial.
But if you're writing code that isn't Horn clauses, you might need a tutorial on writing PROLOG functions (OK, relations), like this one.
What's the most idiomatic approach to output prolog code from a prolog program (as a side effect)?
For instance, in a simple case, I might want to write a program that, given a text input, yields another Prolog program representing the text as a directed graph.
I understand this question is somewhat vague, but I've resigned myself to consulting Stack Overflow after failing to find a satisfying answer in the available Prolog meta-programming literature, which mostly covers applications of meta-circular interpreters.
If you feel this question might be better articulated some other way, please edit it or leave a comment.
The most idiomatic way is always to stay pure and avoid side effects.
Let the toplevel do the writing for you!
To generate a Prolog program, you define a relation that says for example:
program(P) :- ...
and then states, in terms of logical relations, what holds about P.
For example:
program(P) :-
P = ( Head :- Body ),
Head = head(A, B),
Body = body(A, B).
Example query and answer:
?- program(P).
P = (head(_G261, _G262):-body(_G261, _G262)).
So, there's your program, produced in a pure way.
If you want to write it, use portray_clause/1:
?- program(P), portray_clause(P).
head(A, B) :-
body(A, B).
...
This can be useful in a failure driven loop, to produce many programs automatically.
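For example, a minimal sketch of such a loop, using the program/1 relation defined above (the predicate name write_programs is my own):

write_programs :-
    program(P),
    portray_clause(P),
    false.          % fail in order to backtrack into the next program
write_programs.     % succeed once all programs have been written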
writeq/1 (or format('~q', [...])) produces output that can be read back. Usually you also need to put a full stop after the clause. For instance, try
?- A_Clause = (X is 1+Y, write('X is '), write(X), nl), format('~q.~n', [A_Clause]).
Readability of the code suffers from losing the variables' nice names, but the functionality is there...
edit: as noted by #false, a space before the dot will avoid a bug in case the output term itself ends with a dot.
This is a somewhat silly example but I'm trying to keep the concept pretty basic for better understanding. Say I have the following unary relations:
person(steve).
person(joe).
fruit(apples).
fruit(pears).
fruit(mangos).
And the following binary relations:
eats(steve, apples).
eats(steve, pears).
eats(joe, mangos).
I know that querying eats(steve, F). will return all the fruit that steve eats (apples and pears). My problem is that I want to get all of the fruits that steve doesn't eat. I know that \+eats(steve,F) will just return "no", because F can't be bound to an infinite number of possibilities; however, I would like it to return mangos, as that's the only existing fruit possibility that steve doesn't eat. Is there a way to write this that would produce the desired result?
I tried this but no luck here either: \+eats(steve,F), fruit(F).
If a better title is appropriate for this question I would appreciate any input.
Prolog provides only a very crude form of negation; in fact, (\+)/1 simply means "not provable at this point in time of the execution". So you have to take into account the exact moment when (\+)/1 is executed. In your particular case, there is an easy way out:
fruit(F), \+eats(steve,F).
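With the facts above, this yields the expected result:

?- fruit(F), \+ eats(steve, F).
F = mangos.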
In the general case, however, this is far from easily fixed. Think of \+ X = Y; see this answer.
Another issue is that negation, even if used properly, introduces non-monotonic properties into your program: by adding further facts for eats/2, less might be deduced. So unless you really want this (as in this example, where it does make sense), avoid the construct.
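For instance, if the fact eats(steve, mangos). were added to the program above, the very same query would no longer report mangos:

?- fruit(F), \+ eats(steve, F).
false.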
So from what I understand about deterministic predicates:
Deterministic predicate = 1 solution
Non-deterministic predicate = multiple solutions
Are there any rules for how you can detect whether a predicate is one or the other, e.g. by looking at the search tree?
There is no clear, generally accepted consensus about these notions. However, they are usually based on the observed answers rather than on the number of solutions. In certain contexts the notions are very implementation-related. Non-determinate may mean: leaves a choice point open. And sometimes determinate means: never even creates a choice point.
Answers vs. solutions
To see the difference, consider the goal length(L, 1). How many solutions does it have? L = [a] is one, L = [23] another... but all of these solutions are compactly represented with a single answer substitution: L = [_] which thus contains infinitely many solutions.
In any case, in all implementations I know of, length(L, 1) is a determinate goal.
Now consider the goal repeat which has exactly one solution, but infinitely many answers. This goal is considered non-determinate.
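To see this at the toplevel (a sketch; the output shown follows SWI-Prolog, and the exact rendering of the fresh variable differs between systems):

?- length(L, 1).
L = [_G1].        % one answer that covers infinitely many solutions

?- repeat.
true ;
true ;
true .            % one solution, arbitrarily many answers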
In case you are interested in constraints, things become even more involved. In library(clpfd), the goal X #> Y, Y #> X has no solution, but still one answer. Combine this with repeat: infinitely many answers and no solution.
Further, the goal append(Xs, Ys, []) has exactly one solution and also exactly one answer, nevertheless it is considered non-determinate in many implementations, since in those implementations it leaves a choice point open.
In an ideal implementation, there would be no answers without solutions (except false), and there would be non-determinism only when there is more than one answer. But then, all of this is mostly undecidable in the general case.
So, whenever you are using these notions make sure on what level things are meant. Rather explicitly say: multiple answers, multiple solutions, leaves no (unnecessary) choice point open.
You need to understand the difference between det, semidet and undet; it is about more than just the number of solutions.
Because Prolog has no loop control operator, looping (as opposed to recursion) is built from a 'sequence generating' predicate (undet) followed by the loop body. Alternatively, you can collect solutions into a list with one of the findall-family predicates and loop over it later with member/2.
So any piece of your program is either part of a loop construction or part of the usual flow, and the difference between designing det and undet predicates lies mostly in the intended usage: if you can work with a sequence, you always make the predicate undet and comment it as such. There is a nice unit-test extension in SWI-Prolog which can check whether your predicate always behaves as intended in terms of det/semidet/undet (semidet is used the same way as undet, but as the condition of an 'if' construct); a sketch of its use follows the comment convention below.
So the difference is decided at design time, and the question should not arise for already existing predicates. It is good practice to always document the intended usage in a comment like:
% member(?El, ?List) is undet.
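For instance, with SWI-Prolog's unit-test framework plunit (the test names below are my own), a test is by default expected to succeed without leaving a choice point, and a warning is printed otherwise; the nondet option declares the intended non-determinism:

:- begin_tests(determinism_demo).

% Intended to be det: succeeds exactly once and leaves no choice point.
test(sum) :-
    X is 1 + 2,
    X =:= 3.

% member/2 may leave a choice point behind; declare the test as nondet.
test(member_found, [nondet]) :-
    member(b, [a, b, c]).

:- end_tests(determinism_demo).

Run with ?- run_tests.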
Deterministic: Always succeeds with a single answer that is always the same for the same input. Think of a static list of three items, and telling your function to return value one: you will get the same answer every time. Arithmetic functions are another example: 1 + 1 = 2, X + Y = Z.
Semi-deterministic: Succeeds with a single answer that is always the same for the same input, but it can fail. Think of a function that takes a list of numbers, and you ask your function if some number exists in the list. It either does, or it doesn't, based on the contents of the list given and the number asked.
Non-deterministic: Succeeds with a single answer, but can exhibit different behaviors on different runs, even for the same input. Think of any kind of math.random(min, max) function, like random/3.
In essence, this is entirely separate from the concept of choice points, as choice points are a feature of Prolog. Where I think the confusion of these terms comes from in Prolog is that Prolog can find a single answer, then go back and try for another solution, and you have to use the cut operator ! to tell it explicitly that you want to discard your choice points.
This is very useful to know when working with Prolog Unit Testing.
I often end up writing Prolog code that involves some arithmetic calculation (or state information important throughout the program) by first obtaining the value stored in a predicate, then recalculating the value, and finally storing it again using retractall and assert, because in Prolog we cannot assign a value to a variable twice using is (thus making almost every variable that needs modification global). I have come to know that this is not a good practice in Prolog. In this regard I would like to ask:
Why is it a bad practice in Prolog (though I myself don't like to go through the above-mentioned steps just to have a kind of flexible (modifiable) variable)?
What are some general ways to avoid this practice? Small examples will be greatly appreciated.
P.S. I just started learning Prolog. I do have programming experience in languages like C.
Edited for further clarification
A bad example (in win-prolog) of what I want to say is given below:
:- dynamic(value/1).
:- assert(value(0)).
adds :-
value(X),
NewX is X + 4,
retractall(value(_)),
assert(value(NewX)).
mults :-
value(Y),
NewY is Y * 2,
retractall(value(_)),
assert(value(NewY)).
start :-
retractall(value(_)),
assert(value(3)),
adds,
mults,
value(Q),
write(Q).
Then we can query like:
?- start.
Here it is very trivial, but in real programs and applications the above-shown method of global variables becomes unavoidable. Sometimes the list of directives shown above, like assert(value(0))..., grows very long, with many more assert predicates for defining more variables. This is done to make communication of values between different functions possible and to store the states of variables during the runtime of the program.
Finally, I'd like to know one more thing:
When does the practice mentioned above become unavoidable, in spite of the various solutions you have suggested to avoid it?
The general way to avoid this is to think in terms of relations between states of your computations: You use one argument to hold the state that is relevant to your program before a calculation, and a second argument that describes the state after some calculation. For example, to describe a sequence of arithmetic operations on a value V0, you can use:
state0_state(V0, V) :-
operation1_result(V0, V1),
operation2_result(V1, V2),
operation3_result(V2, V).
Notice how the state (in your case: the arithmetic value) is threaded through the predicates. The naming convention V0 -> V1 -> ... -> V scales easily to any number of operations and helps to keep in mind that V0 is the initial value, and V is the value after the various operations have been applied. Each predicate that needs to access or modify the state will have an argument that allows you to pass it the state.
A huge advantage of threading the state through like this is that you can easily reason about each operation in isolation: You can test it, debug it, analyze it with other tools etc., without having to set up any implicit global state. As another huge benefit, you can then use your programs in more directions provided you are using sufficiently general predicates. For example, you can ask: Which initial values lead to a given outcome?
?- state0_state(V0, given_outcome).
This is of course not readily possible when using the imperative style. You should therefore use constraints instead of is/2, because is/2 only works in one direction. Constraints are much easier to use and a more general modern alternative to low-level arithmetic.
The dynamic database is also slower than threading states through in variables, because it performs indexing etc. on each assertz/1.
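As a concrete sketch (the predicate names and the use of library(clpfd)'s #=/2 instead of is/2 are my own choices, following the constraint suggestion above), the adds/mults example from the question can be written by threading the value through arguments instead of going through the dynamic database:

:- use_module(library(clpfd)).

% adds(V0, V): V is the state after adding 4 to V0.
adds(V0, V) :- V #= V0 + 4.

% mults(V0, V): V is the state after doubling V0.
mults(V0, V) :- V #= V0 * 2.

% value0_value(V0, V): V is the result of applying adds, then mults, to V0.
value0_value(V0, V) :-
    adds(V0, V1),
    mults(V1, V).

The query ?- value0_value(3, Q). yields Q = 14, just like start/0 in the question, and because #=/2 is a true relation, ?- value0_value(V0, 14). yields V0 = 3.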
1 - it's bad practice because it destroys the declarative model that (pure) Prolog programs exhibit.
The programmer must then think in procedural terms, and the procedural model of Prolog is rather complicated and difficult to follow.
Specifically, we must be able to decide about the validity of asserted knowledge while the program backtracks, i.e. follows alternative paths to those already tried, which (maybe) caused the assertions.
2 - We need additional variables to keep the state. A practical, maybe not very intuitive, way is to use grammar rules (a DCG) instead of plain predicates. Grammar rules are translated by adding two list arguments, normally hidden, and we can use those arguments to pass the state around implicitly, and reference/change it only where needed.
A really interesting introduction is here: DCGs in Prolog by Markus Triska. Look for Implicitly passing states around: you'll find this enlightening small example:
num_leaves(nil), [N1] --> [N0], { N1 is N0 + 1 }.
num_leaves(node(_,Left,Right)) -->
num_leaves(Left),
num_leaves(Right).
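With this definition, the counter is threaded implicitly and supplied via phrase/3; for example (the sample tree here is my own):

?- phrase(num_leaves(node(a, node(b, nil, nil), nil)), [0], [N]).
N = 3.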
More generally, and for further practical examples, see Thinking in States, from the same author.
edit: generally, assert/retract are required only if you need to change the database, or to keep track of computation results across backtracking. A simple example from my (very) old Prolog interpreter:
findall_p(X,G,_):-
asserta(found('$mark')),
call(G),
asserta(found(X)),
fail.
findall_p(_,_,N) :-
collect_found([],N),
!.
collect_found(S,L) :-
getnext(X),
!,
collect_found([X|S],L).
collect_found(L,L).
getnext(X) :-
retract(found(X)),
!,
X \= '$mark'.
findall/3 can be seen as the basic all-solutions predicate. That code should be the very same as in the Clocksin-Mellish textbook, Programming in Prolog. I used it while testing the 'real' findall/3 I implemented. You can see that it's not 'reentrant', because of the shared '$mark' marker.
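For reference, a query like the following (my own example; found/1 is created dynamically by asserta/1) behaves like the standard findall/3 for simple cases:

?- findall_p(X, member(X, [a, b, c]), Xs).
Xs = [a, b, c].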