Are algebraic predicates supported in Mercury?

I'm very new to Mercury and logic programming in general. I haven't found a numeric example like this in the docs or samples...
Take the example predicate:
:- pred diffThirtyFour(float, float).
:- mode diffThirtyFour(in, out) is det.

diffThirtyFour(A, B) :-
    B = A + 34.0.
With this, A must be ground and B free. What if I want A to be free and B to be ground (e.g., adding :- mode diffThirtyFour(out, in) is det.)? Can this sort of algebra be performed at compile time? I could easily define another predicate, but that doesn't seem very logical...
Update
So, something like this kind of works:
:- pred diffThirtyFour(float, float).
:- mode diffThirtyFour(in, out) is semidet.
:- mode diffThirtyFour(out, in) is semidet.

diffThirtyFour(A, B) :-
    B = A + 34.0,
    A = B - 34.0.
I'm a little wary of the semidet, and of the redundancy of the second goal. Is this the only way of doing it?
Update 2
This might be the answer... it issues a warning at compile time about the disjunct never having any solutions. The warning is correct, but perhaps the dead disjunct is unnecessary code smell? This does what I need, but if there are better solutions out there, feel free to post them...
:- pred diffThirtyFour(float, float).
:- mode diffThirtyFour(in, out) is det.
:- mode diffThirtyFour(out, in) is det.

diffThirtyFour(A, B) :-
    (
        A = B - 34.0,
        B = A + 34.0
    ;
        error("The impossible happened...")  % error/1 is in the require module
    ).

Just discovered the ability to enter different clauses for different modes. This isn't quite an algebraic solver (which I wouldn't have expected anyway), but offers the precise organizational structure I was looking for:
:- pred diffThirtyFour(float, float).
:- mode diffThirtyFour(in, out) is det.
:- mode diffThirtyFour(out, in) is det.
:- pragma promise_pure(diffThirtyFour/2).
diffThirtyFour(A::out,B::in) :- A = B - 34.0.
diffThirtyFour(A::in, B::out) :- B = A + 34.0.
As described in the link, the promise_pure pragma is required because this feature can be used to break semantic consistency. It would also suffice to use the promise_equivalent_clauses pragma, which promises logical consistency without purity. It is still possible to declare clauses with inconsistent semantics with the impure keyword in the pred declaration.
Interestingly, addition and subtraction in the standard int module are invertible, but they are not in the float module. Perhaps that choice was made because floating-point rounding would make the reverse modes inexact.
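For instance (my sketch, not from the original post; diffThirtyFourInt is my own name for the illustration), an int version needs only a single clause, since the int module declares reverse modes for + and -:

:- pred diffThirtyFourInt(int, int).
:- mode diffThirtyFourInt(in, out) is det.
:- mode diffThirtyFourInt(out, in) is det.

% One clause serves both modes: int's + is reversible, so the
% compiler can solve for A when B is ground. float's + is not,
% hence the mode-specific clauses needed above.
diffThirtyFourInt(A, B) :-
    B = A + 34.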

Related

PROLOG - Get a list of all the rules an entity verifies

I'm formalizing linguistic data into predicates and entities and doing some reasoning in Prolog. Imagine I begin with:
breathe(X) :- snore(X).
sleep(X) :- snore(X).
rest(X) :- sleep(X).
live(X) :- breathe(X); eat(X); sleep(X).
snore(john).
sleep(lucy).
My data can get fairly big, and I would like to get a list of entities and predicates so that I can iterate over them and check how many predicates each entity verifies. The output could be lists like:
[john, [snore, breathe, sleep, rest, live]]
[lucy, [sleep, rest]]
or predicates
participant(john, [snore, breathe, sleep, rest, live]).
participant(lucy, [sleep, rest]).
Thanks for your help, I have no clue at this moment.
Representing live knowledge about an abstract world can get messy. There are a lot of different possibilities, and a lot of variance depending on which Prolog system you're using.
Here is an example of your code running in SWI-Prolog, but the same idea should work (more or less) unchanged on any Prolog out there that provides the call/N and setof/3 built-ins.
:- module(list_entities_that_verify_a_pred,
          [participant/2]).

:- redefine_system_predicate(sleep/1).
:- discontiguous breathe/1, sleep/1, rest/1, live/1, snore/1.

breathe(X) :- snore(X).
sleep(X) :- snore(X).
rest(X) :- sleep(X).
live(X) :- breathe(X) ; /* eat(X) ; */ sleep(X).

snore(john).
sleep(lucy).

to_verify(breathe).
to_verify(sleep).
to_verify(rest).
to_verify(live).
to_verify(snore).

participant(Person, Verified) :-
    setof(Pred, (to_verify(Pred), call(Pred, Person)), Verified).
First, note that I have commented out the call to eat/1, to avoid a missing-definition exception, so that we can try to call the participant/2 predicate:
?- participant(P,L).
P = john,
L = [breathe, live, rest, sleep, snore] ;
P = lucy,
L = [live, rest, sleep].
From an architecture viewpoint, the main point to note is the introduction of to_verify/1, to ease the workflow.
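If eat/1 and similar predicates must remain callable even while undefined, one option (my sketch, not part of the original answer) is to guard the call, treating a missing definition as plain failure, and use it in place of the direct call/2:

% Succeeds iff calling Pred on Person provably holds; a call to an
% undefined predicate is caught and treated as simple failure.
verifies(Pred, Person) :-
    catch(call(Pred, Person),
          error(existence_error(procedure, _), _),
          fail).

participant(Person, Verified) :-
    setof(Pred, (to_verify(Pred), verifies(Pred, Person)), Verified).

Note the catch is coarse: an existence error raised anywhere inside the goal aborts the whole call, so declaring the missing predicates dynamic (as discussed further below) is often the cleaner fix.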
An alternative is forward chaining, using Constraint Handling Rules (CHR), an underappreciated paradigm of computation.
This is done using SWI-Prolog's CHR library. The rule engine is implemented "on top of Prolog" and adding a rule to the "constraint store" looks like calling a Prolog goal. The "constraint store" holding the current state of computation disappears once the goal completes.
Note that I'm currently not 100% certain of CHR semantics (it seems my brain is hardwired to read Prolog now) but this code seems to work.
In file sleep.pl:
:- module(forward, [
    snore/1,
    sleep/1,
    collect/2,
    pull/2
]).

:- use_module(library(chr)).

:- chr_constraint snore/1, sleep/1, breathe/1.
:- chr_constraint eat/1, live/1, rest/1, collect/2, pull/2.

snore(X) ==> breathe(X).
snore(X) ==> sleep(X).
sleep(X) ==> rest(X).
breathe(X) ==> live(X).
eat(X) ==> live(X).
sleep(X) ==> live(X).

live(X) \ live(X) <=> true. % eliminates duplicates

collect(Who,L), snore(Who) <=> collect(Who,[snore|L]).
collect(Who,L), sleep(Who) <=> collect(Who,[sleep|L]).
collect(Who,L), breathe(Who) <=> collect(Who,[breathe|L]).
collect(Who,L), eat(Who) <=> collect(Who,[eat|L]).
collect(Who,L), live(Who) <=> collect(Who,[live|L]).
collect(Who,L), rest(Who) <=> collect(Who,[rest|L]).

pull(Who,L) \ collect(Who2,L2) <=> Who = Who2, L = L2.
Now we just need to start SWI-Prolog, and issue these commands:
?- [sleep].
true.
?- sleep(lucy),
collect(lucy,[]),
pull(Who,L).
Who = lucy,
L = [rest,live,sleep],
pull(lucy,[rest,live,sleep]).
?- snore(john),
collect(john,[]),
pull(Who,L).
Who = john,
L = [rest,live,breathe,sleep,snore],
pull(john,[rest,live,breathe,sleep,snore]).
You asked in a comment to Carlo's answer:

    Is there a way to expose the eat predicate and the like? It looks like I'm going to have a lot of unmatched predicate networks like owl and proton because I'm going to use few facts and a lot of semantic relations from Wordnet.
The issue here seems to be one of the closed-world assumption (CWA), where predicates are declared (and thus can be called without generating errors) but not necessarily defined. Under the CWA, what we cannot prove to be true is assumed to be false. E.g. the eat/1 predicate in your example, or the "lot of unmatched predicate networks" in your comment.
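In plain Prolog, the usual way to get that behavior (an aside of mine, not part of the Logtalk solution below) is to declare the missing predicates dynamic, so that calling one with no clauses simply fails instead of raising an existence error:

:- dynamic eat/1.

% With the declaration above and no eat/1 facts asserted yet:
?- eat(john).
false.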
A possible solution would be to define your entities as Logtalk objects that implement a protocol (or a set of protocols) that declare all the predicates you want to use. Reusing your example:
:- protocol(predicates).

    :- public([
        breathe/0, sleep/0, rest/0, live/0,
        snore/0, eat/0
    ]).

:- end_protocol.

:- category(generic,
    implements(predicates)).

    breathe :- ::snore.
    sleep :- ::snore.
    rest :- ::sleep.
    live :- ::breathe; ::eat; ::sleep.

:- end_category.

:- object(john, imports(generic)).

    snore.

:- end_object.

:- object(lucy, imports(generic)).

    sleep.

:- end_object.
If we ask an entity (object) about a predicate that it doesn't define, the query will simply fail. For example (using SWI-Prolog as backend here but you can use most Prolog systems; assuming the code above is saved in a code.lgt file):
$ swilgt
...
?- {code}.
...
?- lucy::eat.
false.
If we want to find all objects that satisfy e.g. the sleep/0 predicate:
?- findall(Object,
       ( current_object(Object),
         conforms_to_protocol(Object, predicates),
         Object::sleep
       ),
       Objects).
Objects = [john, lucy].
If we want to query all predicates satisfied by the objects (here with the simplifying assumption that all those predicates have zero arity):
?- setof(Name,
       Arity^( current_object(Object),
               conforms_to_protocol(Object, predicates),
               Object::current_predicate(Name/Arity),
               Object::Name
             ),
       Predicates).
Object = john,
Predicates = [breathe, live, rest, sleep, snore] ;
Object = lucy,
Predicates = [live, rest, sleep].
But, at least for the most common queries, handy predicate definitions would preferably be added to the generic category.
P.S. For more on the closed-world assumption and predicate semantics and also why Prolog modules fail to provide a sensible alternative solution see e.g. https://logtalk.org/2019/09/30/predicate-semantics.html

Computer Reasoning about Prologish Boolos' Curious Inference

Boolos' curious inference was originally formulated with equations here. It is a recursive definition of a function f and a predicate d via the syntax of N+, the natural numbers without zero, generated from 1 and s(.).
But it can also be formulated with Horn clauses. The logical content is not exactly the same; the predicate f captures only the positive aspect of the function, but the problem type is the same. Take the following Prolog program:
f(_, 1, s(1)).
f(1, s(X), s(s(Y))) :- f(1, X, Y).
f(s(X), s(Y), T) :- f(s(X), Y, Z), f(X, Z, T).
d(1).
d(s(X)) :- d(X).
What's the theoretical logical outcome of the last query, and can you demonstrably have a computer program, in our time and space, that produces the outcome, i.e. post the program on a gist so that everybody can run it?
?- f(X,X,Y).
X = 1,
Y = s(1)
X = s(1),
Y = s(s(s(1)))
X = s(s(1)),
Y = s(s(s(s(s(s(s(s(s(s(...))))))))))
ERROR: Out of global stack
?- f(s(s(s(s(1)))), s(s(s(s(1)))), X), d(X).
If the program that does the job of certifying the result is not a Prolog interpreter itself like here, what would do the job especially suited for this Prologish problem formulation?
One solution: Abstract interpretation
Preliminaries
In this answer, I use an interpreter to show that this holds. However, it is not a Prolog interpreter, because it does not interpret the program in exactly the same way Prolog interprets the program.
Instead, it interprets the program in a more abstract way. Such interpreters are therefore called abstract interpreters.
Program representation
Critically, I work directly with the source program, using only modifications that we, by purely algebraic reasoning, know can be safely applied. It helps tremendously for such reasoning that your source program is completely pure by construction, since it only uses pure predicates.
To simplify reasoning about the program, I now make all unifications explicit. It is easy to see that this does not change the meaning of the program, and can be easily automated. I obtain:
f(_, X, Y) :-
    X = 1,
    Y = s(1).
f(Arg, X, Y) :-
    Arg = 1,
    X = s(X0),
    Y = s(s(Y0)),
    f(Arg, X0, Y0).
f(X, Y, T) :-
    X = s(X0),
    Y = s(Y0),
    f(X, Y0, Z),
    f(X0, Z, T).
I leave it as an easy exercise to show that this is declaratively equivalent to the original program.
The abstraction
The abstraction I use is the following: Instead of reasoning over the concrete terms 1, s(1), s(s(1)) etc., I use the atom d for each term T for which I can prove that d(T) holds.
Let me show you what I mean by the following interpretation of unification:
interpret(d = N) :- d(N).
This says:
If d(N) holds, then N is to be regarded identical to the atom d, which, as we said, shall denote any term for which d/1 holds.
Note that this differs significantly from what an actual unification between concrete terms d and N means! For example, we obtain:
?- interpret(X = s(s(1))).
X = d.
Pretty strange, but I hope you can get used to it.
Extending the abstraction
Of course, interpreting a single unification is not enough to reason about this program, since it also contains additional language elements.
I therefore extend the abstract interpretation to:
conjunction
calls of f/3.
Interpreting conjunctions is easy, but what about f/3?
Incremental derivations
If, during abstract interpretation, we encounter the goal f(X, Y, Z), then we know the following: In principle, the arguments can of course be unified with any terms for which the goal succeeds. So we keep track of those arguments for which we know the query can succeed in principle.
We thus equip the predicate with an additional argument: A list of f/3 goals that are logical consequences of the program.
In addition, we implement the following very important provision: If we encounter a unification that cannot be safely interpreted in abstract terms, then we throw an error instead of failing silently. This may for example happen if the unification would fail when regarded as an abstract interpretation although it would succeed as a concrete unification, or if we cannot fully determine whether the arguments are of the intended domain. The primary purpose of this provision is to avoid unintentional elimination of actual solutions due to oversights in the abstract interpreter. This is the most critical aspect in the interpreter, and any proof-theoretic mechanism will face closely related questions (how can we ensure that no proofs are missed?).
Here it is:
interpret(Var = N, _) :-
    must_be(var, Var),
    must_be(ground, N),
    d(N),
    Var = d.
interpret((A,B), Ds) :-
    interpret(A, Ds),
    interpret(B, Ds).
interpret(f(A, B, C), Ds) :-
    member(f(A, B, C), Ds).
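For example, interpreting the body of the first clause against an empty list of known f/3 consequences (my query, to show the abstraction in action):

?- interpret((X = 1, Y = s(1)), []).
X = Y, Y = d.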
Quis custodiet ipsos custodes?
How can we tell whether this is actually correct? That's the tough part! In fact, it turns out that the above is not sufficient to be certain to catch all cases, because it may simply fail if d(N) does not hold. It is obviously not acceptable for the abstract interpreter to fail silently for cases it cannot handle. So we need at least one more clause:
interpret(Var = N, _) :-
    must_be(var, Var),
    must_be(ground, N),
    \+ d(N),
    domain_error(d, N).
In fact, an abstract interpreter becomes a lot less error-prone when we reason about ground terms, and so I will use the atom any to represent "any term at all" in derived answers.
Over this domain, the interpretation of unification becomes:
interpret(Var = N, _) :-
    must_be(ground, N),
    (   var(Var) ->
        (   d(N) -> Var = d
        ;   N = s(d) -> Var = d
        ;   N = s(s(d)) -> Var = d
        ;   domain_error(d, N)
        )
    ;   Var == any -> true
    ;   domain_error(any, Var)
    ).
In addition, I have implemented further cases of the unification over this abstract domain. I leave it as an exercise to ponder whether this correctly models the intended semantics, and to implement further cases.
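To give a flavor of what such a case might look like, here is one plausible extra clause (my sketch only; it is not from the original interpreter, and pondering its correctness is part of the exercise):

% My sketch of one further case: the left-hand side has already been
% abstracted to the atom d, and the ground right-hand side must then
% also lie in the d domain.
interpret(Var = N, _) :-
    Var == d,
    must_be(ground, N),
    (   d(N) -> true
    ;   domain_error(d, N)
    ).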
As it will turn out, the interpreter above suffices to answer the posted question. However, it clearly leaves a lot to be desired: it is more complex than we would like, and it becomes increasingly harder to tell whether we have covered all cases. Note though that any proof-theoretic approach will face closely corresponding issues: the more complex and powerful it becomes, the harder it is to tell whether it is still correct.
All derivations: See you at the fixpoint!
It now remains to deduce everything that follows from the original program.
Here it is, a simple fixpoint computation:
derivables(Ds) :-
    functor(Head, f, 3),
    findall(Head-Body, clause(Head, Body), Clauses),
    derivables_fixpoint(Clauses, [], Ds).

derivables_fixpoint(Clauses, Ds0, Ds) :-
    findall(D, clauses_derivable(Clauses, Ds0, D), Ds1, Ds0),
    term_variables(Ds1, Vs),
    maplist(=(any), Vs),
    sort(Ds1, Ds2),
    (   same_length(Ds2, Ds0) -> Ds = Ds0
    ;   derivables_fixpoint(Clauses, Ds2, Ds)
    ).

clauses_derivable(Clauses, Ds0, Head) :-
    member(Head-Body, Clauses),
    interpret(Body, Ds0).
Since we are deriving ground terms, sort/2 removes duplicates.
Example query:
?- derivables(Ds).
ERROR: Arguments are not sufficiently instantiated
Somewhat anticlimactically, the abstract interpreter is unable to process this program!
Commutativity to the rescue
In a proof-theoretic approach, we search for, well, proofs. In an interpreter-based approach, we can either improve the interpreter or apply algebraic laws to transform the source program in a way that preserves essential properties.
In this case, I will do the latter, and leave the former as an exercise. Instead of searching for proofs, we are searching for equivalent ways to write the program so that our interpreter can derive the desired properties. For example, I now use commutativity of conjunction to obtain:
f(_, X, Y) :-
    X = 1,
    Y = s(1).
f(Arg, X, Y) :-
    Arg = 1,
    f(Arg, X0, Y0),
    X = s(X0),
    Y = s(s(Y0)).
f(X, Y, T) :-
    f(X, Y0, Z),
    f(X0, Z, T),
    X = s(X0),
    Y = s(Y0).
Again, I leave it as an exercise to carefully check that this program is declaratively equivalent to your original program.
iamque opus exegi, because:
?- derivables(Ds).
Ds = [f(any, d, d)].
This shows that in each solution of f/3, the last two arguments are always terms for which d/1 holds! In particular, it also holds for the sample arguments you posted, even if there is no hope to ever actually compute the concrete terms!
Conclusion
By abstract interpretation, we have shown:
for all X where f(_, _, X) holds, d(X) also holds
beyond that, for all Y where f(_, Y, _) holds, d(Y) also holds.
The question only asked for a special case of the first property. We have shown significantly more!
In summary:
If f(_, Y, X) holds, then d(X) holds and d(Y) holds.
Prolog makes it comparatively easy and convenient to reason about Prolog programs. This often allows us to derive interesting properties of Prolog programs, such as termination properties and type information.
Please see Reasoning about programs for references and more explanation.
+1 for a great question and reference.

How to check if a variable is instantiated in Mercury

I am a complete beginner with the Mercury language, although I have learned Prolog before. One of the new aspects of Mercury is determinism. The main predicate has to be deterministic. In order to make it so, I have to check whether a variable is unified/bound to a value, but I cannot find how to do that. In particular, see the code:
main(!IO) :-
    mother(X, "john"),
    ( if bound(X)  % <-- this is my failed attempt; how to check if X is unified?
      then
        io.format("%s\n", [s(X)], !IO)
      else
        io.write_string("Not available\n", !IO)
    ).
Such a main could not fail, i.e. it would (I guess) satisfy the determinism constraint. So the question is: how do I check whether a variable is bound?
I've translated the family tree from a Prolog example found on this site for comparison.
I have specified all facts (persons) and their relationship with each other and a few helper predicates.
Note that this typed version actually did find the error that you see in the top answer: female(jane).
The main predicate does not have to be deterministic; it can also be cc_multi, which means Mercury will commit to a choice (and not try the others); you can verify this by replacing mother with parent.
You also do not have to check your variables for boundness. Instead, just use any non-deterministic goal as the condition of an if-then-else: on success the then part has guaranteed bound variables, while in the else part they are unbound.
If you want this example to be more dynamic, you will have to use the lexer or term module to parse input into a person atom.
If you want all solutions, you should check the solutions module; see the sketch after the code listing.
%-------------------------------%
% vim: ft=mercury ff=unix ts=4 sw=4 et
%-------------------------------%
% File: relationship.m
%-------------------------------%
% Classical example of family relationship representation,
% based on: https://stackoverflow.com/questions/679728/prolog-family-tree
%-------------------------------%
:- module relationship.
:- interface.
:- import_module io.
%-------------------------------%
:- pred main(io::di, io::uo) is cc_multi.
%-------------------------------%
%-------------------------------%
:- implementation.
:- type person
    --->    john
    ;       bob
    ;       bill
    ;       ron
    ;       jeff
    ;       mary
    ;       sue
    ;       nancy
    ;       jane.

:- pred person(person::out) is multi.

person(Person) :- male(Person).
person(Person) :- female(Person).
:- pred male(person).
:- mode male(in) is semidet.
:- mode male(out) is multi.
male(john).
male(bob).
male(bill).
male(ron).
male(jeff).
:- pred female(person).
:- mode female(in) is semidet.
:- mode female(out) is multi.
female(mary).
female(sue).
female(nancy).
female(jane).
:- pred parent(person, person).
:- mode parent(in, in) is semidet.
:- mode parent(in, out) is nondet.
:- mode parent(out, in) is nondet.
:- mode parent(out, out) is multi.
parent(mary, sue).
parent(mary, bill).
parent(sue, nancy).
parent(sue, jeff).
parent(jane, ron).
parent(john, bob).
parent(john, bill).
parent(bob, nancy).
parent(bob, jeff).
parent(bill, ron).
:- pred mother(person, person).
:- mode mother(in, in) is semidet.
:- mode mother(in, out) is nondet.
:- mode mother(out, in) is nondet.
:- mode mother(out, out) is nondet.
mother(Mother, Child) :-
    female(Mother),
    parent(Mother, Child).
:- pred father(person, person).
:- mode father(in, in) is semidet.
:- mode father(in, out) is nondet.
:- mode father(out, in) is nondet.
:- mode father(out, out) is nondet.
father(Father, Child) :-
    male(Father),
    parent(Father, Child).
%-------------------------------%
main(!IO) :-
    Child = john,   % try sue or whatever for the first answer
    ( if mother(Mother, Child) then
        io.write(Mother, !IO),
        io.print(" is ", !IO),
        io.write(Child, !IO),
        io.print_line("'s mother", !IO)
    else
        io.write(Child, !IO),
        io.print_line("'s mother is unknown", !IO)
    ).
%-------------------------------%
:- end_module relationship.
%-------------------------------%
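Regarding the solutions module mentioned above, collecting all answers of a non-deterministic goal into a list could look something like this (my sketch, not part of the file above; it additionally assumes :- import_module list, solutions in the implementation section):

:- pred all_mothers_of_john(list(person)::out) is det.

% solutions/2 gathers every solution of the nondet lambda
% into a sorted, duplicate-free list.
all_mothers_of_john(Mothers) :-
    solutions((pred(M::out) is nondet :- mother(M, john)), Mothers).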
You do not need to check variables for their instantiation state. I have never done it in almost 10 years of using Mercury on a daily basis. For each predicate and predicate's mode, Mercury knows statically the instantiation of each variable at each point in the program. So using your example:
% I'm making some assumptions here.
:- pred mother(string::out, string::in) is det.
main(!IO) :-
mother(X,"john"),
io.format("%s\n", [s(X)], !IO).
The declaration for mother says that its first argument is an output argument; this means that after the call to mother its value will be ground, and so it can be printed. If you did use a var predicate (and there is one in the standard library), it would always fail.
To do this, Mercury places other requirements on the programmer. For example:

(
    X = a,
    Y = b
;
    X = c
),
io.write(Y, !IO)

The above code is illegal, because Y is produced in the first disjunct but not in the second, so its groundness is not well defined. The compiler also knows that the disjunction is a switch (when X is already ground), because only one disjunct can possibly be true, so there it produces only one answer.
Where this static groundness can become tricky is when you have a multi-moded predicate. Mercury may need to re-order your conjunctions to make the program mode-correct, e.g. to move a use of a variable after its production. Nevertheless, the uses and instantiation states of variables will always be statically known.
I understand this is really quite different from Prolog, and may require a fair amount of unlearning.
Hope this helps, all the best.
main(!IO) :-
    mother(X, "john"),
    ( if bound(X)  % <-- this is my failed attempt; how to check if X is unified?
      then
        io.format("%s\n", [s(X)], !IO)
      else
        io.write_string("Not available\n", !IO)
    ).
This code doesn't make a lot of sense in Mercury. If X is a standard output variable from mother, it will never succeed with X unbound.
If mother is deterministic (det) in this mode, it will always give you a single value for X. In that case there would be no need to check anything; mother(X, "john") gives you John's mother, end of story, so there's no need for the "Not available" case.
Since you're trying to write cases to handle both when mother gives you something and when it doesn't, I'm assuming mother is not deterministic. If it's semidet then there are two possibilities; it succeeds with X bound to something, or it fails. It will not succeed with X unbound.
Your code doesn't say anything about how main (which is required to always succeed) can succeed if mother fails. Remember that the comma is logical conjunction (and). Your code says "main succeeds if mother(X, "john") succeeds AND (if .. then ... else ...) succeeds". But if mother fails, then what? The compiler would reject your code for this reason, even if the rest of your code was valid.
But the if ... then ... else ... construct is exactly designed to allow you to check a goal that may succeed or fail, and specify a case for when the goal succeeds and a case for when it fails, such that the whole if/then/else always succeeds. So all you need to do is put mother(X, "john") in the condition of the if/then/else.
In Mercury you don't switch on variables that are either bound or unbound. Instead just check whether the goal that might have bound the variable succeeded or failed.
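Concretely, the question's main becomes something like this (a sketch, assuming a mode such as mother(out, in) is semidet):

main(!IO) :-
    ( if mother(X, "john") then
        io.format("%s\n", [s(X)], !IO)
    else
        io.write_string("Not available\n", !IO)
    ).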

Collect all "minimum" solutions from a predicate

Given the following facts in a database:
foo(a, 3).
foo(b, 2).
foo(c, 4).
foo(d, 3).
foo(e, 2).
foo(f, 6).
foo(g, 3).
foo(h, 2).
I want to collect all first arguments that have the smallest second argument, plus the value of the second argument. First try:
find_min_1(Min, As) :-
    setof(B-A, foo(A, B), [Min-_|_]),
    findall(A, foo(A, Min), As).
?- find_min_1(Min, As).
Min = 2,
As = [b, e, h].
Instead of setof/3, I could use aggregate/3:
find_min_2(Min, As) :-
    aggregate(min(B), A^foo(A, B), Min),
    findall(A, foo(A, Min), As).
?- find_min_2(Min, As).
Min = 2,
As = [b, e, h].
NB
This only gives the same results if I am looking for the minimum of a number. If an arithmetic expression is involved, the results might be different. If a non-number is involved, aggregate(min(...), ...) will throw an error!
Or, instead, I can use the full key-sorted list:
find_min_3(Min, As) :-
    setof(B-A, foo(A, B), [Min-First|Rest]),
    min_prefix([Min-First|Rest], Min, As).

min_prefix([Min-First|Rest], Min, [First|As]) :-
    !,
    min_prefix(Rest, Min, As).
min_prefix(_, _, []).
?- find_min_3(Min, As).
Min = 2,
As = [b, e, h].
Finally, to the question(s):
Can I do this directly with library(aggregate)? It feels like it should be possible....
Or is there a predicate like std::partition_point from the C++ standard library?
Or is there some easier way to do this?
EDIT:
To be more descriptive. Say there was a (library) predicate partition_point/4:
partition_point(Pred_1, List, Before, After) :-
    partition_point_1(List, Pred_1, Before, After).

partition_point_1([], _, [], []).
partition_point_1([H|T], Pred_1, Before, After) :-
    (   call(Pred_1, H)
    ->  Before = [H|B],
        partition_point_1(T, Pred_1, B, After)
    ;   Before = [],
        After = [H|T]
    ).
(I don't like the name but we can live with it for now)
Then:
find_min_4(Min, As) :-
    setof(B-A, foo(A, B), [Min-X|Rest]),
    partition_point(is_min(Min), [Min-X|Rest], Min_pairs, _),
    pairs_values(Min_pairs, As).

is_min(Min, Min-_).
?- find_min_4(Min, As).
Min = 2,
As = [b, e, h].
What is the idiomatic approach to this class of problems?
Is there a way to simplify the problem?
Many of the following remarks could be added to many programs here on SO.
Imperative names
Every time you write an imperative name for something that is a relation, you reduce your understanding of relations. Not much, just a little bit. Many common Prolog idioms like append/3 do not set a good example. Think of append(As, As, AsAs). The first argument of find_min(Min, As) is the minimum, so minimum_with_nodes/2 might be a better name.
findall/3
Do not use findall/3 unless the uses are rigorously checked; essentially everything must be ground. In your case it happens to work, but once you generalize foo/2 a bit, you will lose. And that is frequently a problem: you write a tiny program and it seems to work.
Once you move to bigger ones, the same approach no longer works. findall/3 is (compared to setof/3) like a bull in a china shop, smashing the fine fabric of shared variables and quantification. Another problem is that accidental failure does not lead to failure of findall/3, which often leads to bizarre, hard-to-imagine corner cases.
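A classic illustration of the difference (my example, not from the original answer): with a free variable in the goal, setof/3 enumerates its bindings on backtracking, while findall/3 silently treats it as existentially quantified inside each solution:

?- setof(X, member(X-Y, [a-1, b-2]), Xs).
Y = 1, Xs = [a] ;
Y = 2, Xs = [b].

?- findall(X, member(X-Y, [a-1, b-2]), Xs).
Xs = [a, b].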
Untestable, too specific program
Another problem is somewhat related to findall/3, too. Your program is so specific that it is quite improbable that you will ever test it. And marginal changes will invalidate your tests, so you will soon give up on testing. Let's see what is specific: primarily the foo/2 relation. Yes, it is only an example, but think of how to set up a test configuration where foo/2 may change. After each change (writing a new file) you will have to reload the program. This is so complex that chances are you will never do it. I presume you do not have a test harness for that; plunit, for one, does not cover such testing.
As a rule of thumb: If you cannot test a predicate on the top level you never will. Consider instead
minimum_with(Rel_2, Min, Els)
With such a relation, you can now have a generalized xfoo/3 with an additional parameter, say:
xfoo(o, A, B) :-
    foo(A, B).
xfoo(n, A, B) :-
    newfoo(A, B).
and you most naturally get two answers for minimum_with(xfoo(X), Min, Els). Had you used findall/3 instead of setof/3, you would already have serious problems. Or just in general: minimum_with(\A^B^member(A-B, [x-10,y-20]), Min, Els). So you can play around on the top level and produce lots of interesting test cases; a sketch of minimum_with/3 itself follows.
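A minimal sketch of such a relation (my code, reusing min_prefix/3 from version 3 of the question):

% minimum_with(Rel_2, Min, Els): Min is the smallest value V with
% call(Rel_2, E, V), and Els are all E associated with that value.
minimum_with(Rel_2, Min, Els) :-
    setof(V-E, call(Rel_2, E, V), [Min-First|Rest]),
    min_prefix([Min-First|Rest], Min, Els).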
Unchecked border cases
Your version 3 is clearly my preferred approach; however, there are still some parts that can be improved. In particular, there may be answers that contain variables as a minimum; these should be checked.
And certainly setof/3 has its limits too, and ideally you would test them. Answers should not contain constraints, in particular not on the relevant variables. This shows how setof/3 itself has certain limits. After the pioneering phase, SICStus produced many errors for constraints in such cases (mid 1990s), and later changed to systematically ignoring constraints in built-ins that cannot handle them. SWI, on the other hand, does entirely undefined things here; sometimes things are copied, sometimes not. As an example take:
setof(A, ( A in 1..3 ; A in 3..5 ), _) and setof(t, ( A in 1..3 ; A in 3..5 ), _).
By wrapping the goal this can be avoided.
call_unconstrained(Goal_0) :-
    call_residue_vars(Goal_0, Vs),
    (   Vs = [] -> true
    ;   throw(error(representation_error(constraint), _))
    ).
Beware, however, that SWI has spurious constraints:
?- call_residue_vars(all_different([]), Xs).
Xs = [_A].
It is not clear whether this is a feature in the meantime. It has been there since the introduction of call_residue_vars/2, about 5 years ago.
I don't think that library(aggregate) covers your use case. aggregate(min) allows for one witness:
min(Expr, Witness)
A term min(Min, Witness), where Min is the minimal version of Expr over all solutions, and Witness is any other template applied to solutions that produced Min. If multiple solutions provide the same minimum, Witness corresponds to the first solution.
Some time ago, I wrote a small 'library', lag.pl, with predicates to aggregate with low overhead - hence the name (LAG = Linear AGgregate). I've added a snippet that handles your use case:
integrate(min_list_associated, Goal, Min-Ws) :-
    State = term(_, [], _),
    forall(call(Goal, V, W),      % W stands for witness
           (   arg(1, State, C),  % C is current min
               arg(2, State, CW), % CW are current min witnesses
               (   ( var(C) ; V #< C )
               ->  U = V, Ws = [W]
               ;   U = C,
                   (   C == V
                   ->  Ws = [W|CW]
                   ;   Ws = CW
                   )
               ),
               nb_setarg(1, State, U),
               nb_setarg(2, State, Ws)
           )),
    arg(1, State, Min), arg(2, State, Ws).
It's a simple-minded extension of integrate(min)...
The comparison method is surely questionable (it uses the less general operator for equality); it could be worth adopting instead a conventional call like the one predsort/3 uses. Efficiency-wise, it would be better still to encode the comparison method as an option in the 'function selector' (min_list_associated in this case).
edit Thanks @false and @Boris for correcting the bug relative to the state representation: calling nb_setarg(2, State, Ws) actually changes the term's shape when State = (_,[],_) was used. Will update the github repo accordingly...
Using library(pairs) and sort/4, this can be simply written as:

?- bagof(B-A, foo(A, B), Ps),
   sort(1, @=<, Ps, Ss), % or keysort(Ps, Ss)
   group_pairs_by_key(Ss, [Min-As|_]).
Min = 2,
As = [b, e, h].
This call to sort/4 can be replaced with keysort/2, but with sort/4 one can also find, for example, the first arguments associated with the largest second argument: just use @>= as the order.
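For example (my query, against the foo/2 facts at the top of the question):

?- bagof(B-A, foo(A, B), Ps),
   sort(1, @>=, Ps, Ss),
   group_pairs_by_key(Ss, [Max-As|_]).
Max = 6,
As = [f].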
This solution is probably not as time and space efficient as the other ones, but may be easier to grok.
But there is another way to do it altogether:
?- bagof(A, ( foo(A, Min), \+ ( foo(_, Y), Y #< Min ) ), As).
Min = 2,
As = [b, e, h].

Formulation in Prolog

I currently have the following problem, which I want to solve with Prolog. It's an easy example that would be easy to solve in Java/C/whatever. My problem is that I believe I am too tied to Java's way of thinking to formulate the problem in a way that makes use of Prolog's logic power.
The problem is..
I have a set of 6 arrows, either pointing left or right. Let's assume that they are in the following starting configuration:
->
<-
->
<-
->
<-
Now, I can switch two arrows as long as they are next to each other. My goal is to discover which sequence of actions will make the initial configuration of arrows turn into
<-
<-
<-
->
->
->
My initial attempt at formulating the problem is..
right(arrow_a).
left(arrow_b).
right(arrow_c).
left(arrow_d).
right(arrow_e).
left(arrow_f).
atPosition(1, arrow_a).
atPosition(2, arrow_b).
atPosition(3, arrow_c).
atPosition(4, arrow_d).
atPosition(5, arrow_e).
atPosition(6, arrow_f).
This will tell Prolog what the initial configuration of the arrows is. But now, how do I add additional logic to it? How would I implement, for example, switchArrows(Index)? Is it even correct to state the initial conditions like this in Prolog? Won't it interfere later when I try to assert, for example, that arrow_a is at position 6, atPosition(6, arrow_a)?
Your problem can be formulated as a sequence of transitions between configurations. First think about how you want to represent a single configuration. You could use a list to do this, for example [->,<-,->,<-,->,<-] to represent the initial configuration. A single move could be described with a relation step/2 that is used as step(State0, State) and describes the relation between two configurations that are "reachable" from each other by flipping two adjacent arrows. It will in general be nondeterministic. Your main predicate then describes a sequence of state transitions that lead to the desired target state from an initial state. Since you want to describe a list (of configurations), DCGs are a good fit:
solution(State0, Target) -->
    (   { State0 == Target } -> []
    ;   { step(State0, State1) },
        [State1],
        solution(State1, Target)
    ).
And then use iterative deepening to find a solution if one exists, as in:
?- length(Solution, _), phrase(solution([->,<-,->,<-,->,<-], [<-,<-,<-,->,->,->]), Solution).
The nice thing is that Prolog automatically backtracks once all sequences of a given length have been tried and the target state could not yet be reached. You only have to implement step/2 now and are done.
Since a complete solution is posted already, here is mine:
solution(State0, Target) -->
    (   { State0 == Target } -> []
    ;   { step(State0, State1) },
        [State1],
        solution(State1, Target)
    ).

flip(->, <-).
flip(<-, ->).

step([], []).
step([A|Rest0], [A|Rest]) :- step(Rest0, Rest).
step([A0,A1|Rest], [B0,B1|Rest]) :- flip(A0, B0), flip(A1, B1).
Example query:
?- length(Solution, _), phrase(solution([->,<-,->,<-,->,<-], [<-,<-,<-,->,->,->]), Solution).
Solution = [[->, <-, ->, <-, <-, ->],
[->, <-, ->, ->, ->, ->],
[->, ->, <-, ->, ->, ->],
[<-, <-, <-, ->, ->, ->]].
Since iterative deepening is used, we know that no shorter solution (less than 4 steps) is possible.
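The iterative deepening comes from the leading length(Solution, _) goal: on backtracking it enumerates lists of length 0, 1, 2, ..., so shorter solutions are always tried before longer ones (output as in SWI-Prolog):

?- length(Ls, L).
Ls = [],
L = 0 ;
Ls = [_A],
L = 1 ;
Ls = [_A, _B],
L = 2 .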
I also have a general comment on what you said:
    It's an easy example, that would be easy to solve in Java/C/whatever. My problem is that I believe to be too tied to Java's thinking to actually formulate the problem in a way that makes use of Prolog's logic power.
Personally, I think this example is already much more than could be expected from a beginning, say, Java programmer. Please try to solve this problem in Java/C/whatever and see how far you get. In my experience, when students say they are "too tied to Java's thinking" etc., they cannot solve the problem in Java either. Prolog is different, but not that different that, if you had a clear idea of how to solve it in Java, could not translate it quite directly to Prolog. My solution uses the built-in search mechanism of Prolog, but you do not have to: You could implement the search yourself just as you would in Java.
Here is my solution:
solution(Begin, End, PrevSteps, [Step | Steps]) :-
    Step = step(Begin, State1),
    Step,
    forall(member(step(S, _), PrevSteps),
           State1 \= S),          % prevent loops
    (   State1 == End
    ->  Steps = []
    ;   solution(State1, End, [Step | PrevSteps], Steps)
    ).

rev(->, <-).
rev(<-, ->).

step([X,Y|T], [XX,YY|T]) :- rev(X, XX), rev(Y, YY).
step([A,X,Y|T], [A,XX,YY|T]) :- rev(X, XX), rev(Y, YY).
step([A,B,X,Y|T], [A,B,XX,YY|T]) :- rev(X, XX), rev(Y, YY).
step([A,B,C,X,Y|T], [A,B,C,XX,YY|T]) :- rev(X, XX), rev(Y, YY).
step([A,B,C,D,X,Y], [A,B,C,D,XX,YY]) :- rev(X, XX), rev(Y, YY).

run :-
    solution([->,<-,->,<-,->,<-], [<-,<-,<-,->,->,->], [], Steps),
    !,
    forall(member(Step, Steps), writeln(Step)).
It finds only the first of all possible solutions, and I suppose the solution found is not optimal, just the first one that works.
Managed to transform mat's code to Mercury:
:- module arrows.
:- interface.
:- import_module io.

:- pred main(io, io).
:- mode main(di, uo) is cc_multi.

:- implementation.
:- import_module list, io, int.

:- type arrow
    --->    (->)
    ;       (<-).

:- mode solution(in, in, in, in, out, in) is cc_nondet.
solution(State0, Target, MaxDepth, CurrentDepth) -->
    { CurrentDepth =< MaxDepth },
    (   { State0 = Target } -> []
    ;   { step(State0, State1) },
        [State1],
        solution(State1, Target, MaxDepth, CurrentDepth + 1)
    ).

flip(->, <-).
flip(<-, ->).

step([], []).
step([A|Rest0], [A|Rest]) :- step(Rest0, Rest).
step([A0,A1|Rest], [B0,B1|Rest]) :- flip(A0, B0), flip(A1, B1).

main -->
    (   {
            member(N, 1..10),
            solution([->,<-,->,<-,->,<-], [<-,<-,<-,->,->,->], N, 0, Solution, [])
        }
    ->  print(Solution)
    ;   print("No solutions")
    ).
Compiling with
mmc --infer-all arrows.m
to infer signatures & determinism
