I understand that some Prologs support dictionary-like associative data structures out of the box. For the implementations that do, do they support some notion of partial unification with another structure that doesn't actually contain all of the keys?
For example, in the syntax of core.logic/miniKanren:
(run* [q]
(== {:foo 1 :bar 2} (partial-map {:foo q})))
This would return a single result where q is bound to 1.
Do Prologs give this operation or this partial structure a name?
In general, one works around Prolog's poor selection of fundamental data types in the standard way: by adding libraries and programming against interfaces. SWI-Prolog, for example, comes with the assoc library, which implements an AVL tree-based association data structure. (As an aside, balanced trees are more common in functional and logic programming than hash tables because it is easier to create "persistent" data structures with trees than with hash tables, persistent in the FP sense of sharing internal structure.)
Using this library looks something like this:
?- [library(assoc)].
% library(assoc) compiled into assoc 0.00 sec, 97 clauses
true.
?- empty_assoc(Assoc).
Assoc = t.
?- empty_assoc(Assoc), get_assoc(test, Assoc, V).
false.
?- empty_assoc(Assoc), put_assoc(test, Assoc, foo, Assoc2).
Assoc = t,
Assoc2 = t(test, foo, -, t, t).
?- empty_assoc(Assoc),
put_assoc(test, Assoc, foo, Assoc2),
get_assoc(test, Assoc2, Value).
Assoc = t,
Assoc2 = t(test, foo, -, t, t),
Value = foo.
Once you have something that gives you an interface like this, you can define all kinds of logical relations on top of it. Once you have logical relations, Prolog's normal unification machinery will take care of the rest; no special support for this or that data type is required. Based on your requirements, I think what you want is essentially a subset relation: every key of one association must be present in the other, and it must map to the same value. I guess that would look something like this:
association_subset(Left, Right) :-
    forall(gen_assoc(Key, Left, Value), get_assoc(Key, Right, Value)).
This predicate will only be true if the Left association is a subset of the Right association, as defined above. We can test it and see if it's doing what we want:
simple(Assoc) :-
empty_assoc(Empty),
put_assoc(foo, Empty, foo_test, V1),
put_assoc(bar, V1, bar_test, Assoc).
complex(Assoc) :-
simple(Assoc1),
put_assoc(baz, Assoc1, bazzle, Assoc).
unrelated(Assoc) :-
empty_assoc(Empty),
put_assoc(baz, Empty, bazzle, Assoc).
...
?- simple(X), complex(Y), association_subset(X, Y).
X = t(foo, foo_test, <, t(bar, bar_test, -, t, t), t),
Y = t(baz, bazzle, -, t(bar, bar_test, -, t, t), t(foo, foo_test, -, t, t)).
?- simple(X), simple(Y), association_subset(X, Y).
X = Y, Y = t(foo, foo_test, <, t(bar, bar_test, -, t, t), t).
?- simple(X), unrelated(Y), association_subset(X, Y).
false.
?- complex(X), simple(Y), association_subset(X, Y).
false.
We can translate this to your exact question like so:
left(Assoc) :-
empty_assoc(Empty),
put_assoc(foo, Empty, 1, Assoc).
right(Assoc) :-
left(Assoc1),
put_assoc(bar, Assoc1, 2, Assoc).
?- left(L), right(R), association_subset(L, R), get_assoc(foo, L, Q).
L = t(foo, 1, -, t, t),
R = t(foo, 1, <, t(bar, 2, -, t, t), t),
Q = 1.
I realize that this answer doesn't really answer the question you asked, but I hope it answers the question beneath the question. In other words, there doesn't need to be special support for these data structures: the above predicate could be defined over association lists just as well. All you need are the usual ways of making an empty association, adding to it, testing for a key, and generating the keys/values of the association; the underlying data structure becomes irrelevant. No special support is necessary, either data-structure-wise or unification-wise. Special syntax would certainly make it nicer to look at, but it isn't necessary to get the behavior you desire.
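For illustration, here is a sketch of the same subset check over plain lists of Key-Value pairs instead of AVL trees (the predicate name is mine); only the enumeration and lookup operations change, the logic stays the same:

% pairs_subset(+Left, +Right): every Key-Value pair of Left also occurs in Right.
% Same relation as association_subset/2, but over ordinary pairs lists.
pairs_subset(Left, Right) :-
    forall(member(Key-Value, Left), memberchk(Key-Value, Right)).

For example, pairs_subset([foo-1], [foo-1, bar-2]) succeeds, while pairs_subset([foo-2], [foo-1, bar-2]) fails.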
Some Prolog systems, such as Eclipse, have a record notation. This can be used when you know the possible keys of your map in advance, but it needs a type declaration. The record notation is also found in Prolog descendant languages such as Erlang.
The idea is very simple. You first declare
a record type (some syntax invented here):
:- rectype T{K1,...,Kn}.
Now you can use records inside your Prolog program; just write (again some syntax invented here):
... T{F1 = V1, ..., Fm = Vm} ...
At compile time the record will be converted into a compound term and can then easily be used in normal unification. The conversion reorders the key-value pairs according to the record type declaration, then drops the keys and uses only the positions. Unused positions are replaced by anonymous variables, or by default values if the record type declaration also covers this.
... T(W1, ..., Wn) ...
Your example would work as follows:
:- rectype myrec{foo, bar}.
?- myrec{foo=1,bar=2} = myrec{foo=q}.
The latter query would be internally executed as:
?- myrec(1,2) = myrec(q,_).
For more details how Eclipse does it, see for example here:
http://www.eclipseclp.org/doc/bips/kernel/syntax/struct-1.html
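Lacking such record syntax, the translation can also be emulated by hand in plain Prolog. Here is a minimal sketch (the predicate name and the two-field layout are my own invention, not Eclipse's API):

% myrec_record(+FieldValuePairs, -Term): map a list of Field=Value pairs
% onto the positional compound myrec(Foo, Bar). Unmentioned fields stay
% unbound, so ordinary unification gives the "partial" behavior.
myrec_record(Pairs, myrec(Foo, Bar)) :-
    ( memberchk(foo = Foo, Pairs) -> true ; true ),
    ( memberchk(bar = Bar, Pairs) -> true ; true ).

% ?- myrec_record([foo=1, bar=2], T1), myrec_record([foo=Q], T2), T1 = T2.
% Q = 1.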
For dynamic maps where the key set is not static, you can implement dynamic data structures as the other post about SWI-Prolog AVL trees shows. Or ask your Prolog system for a handle to a specific data structure: implement one via an FFI (Foreign Function Interface), or access those which are already bundled with the Prolog system. Eclipse, for example, bundles a couple; see the "Description" section in the article below:
http://www.eclipseclp.org/doc/bips/kernel/record/index.html
Bye
It is not that clear to me what you actually want (you have removed the hashing aspect), but maybe you rather want feature terms or feature structures?
They are popular with linguists and have been part of Life.
It is possible to implement them with the help of attributed variables, but so far, I have not seen much of a demand for them.
You can also simulate them a bit clumsily with syntactic unification. It is clumsy because you need to represent each feature with a separate argument (you can do this slightly better, but it is also more complex then). So if your program contains n features, a feature structure will contain n different arguments most of which will never be touched. However, unification will then work directly as intended.
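As a small illustration of that clumsy encoding (the names fs/3, fs_foo/2 and so on are made up for this example): give every known feature a fixed argument position and leave untouched positions as fresh variables, so plain unification already performs the partial match from the question.

% A feature structure with the known features foo, bar, baz, encoded
% positionally as fs(Foo, Bar, Baz). Unused positions remain variables.
fs_foo(fs(Foo, _, _), Foo).
fs_bar(fs(_, Bar, _), Bar).
fs_baz(fs(_, _, Baz), Baz).

A query such as FS = fs(1, 2, _), fs_foo(FS, Q) then binds Q to 1, mirroring the {:foo 1 :bar 2} example.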
Related
I have a predicate that takes two arguments, where the first argument can be a compound term and the second argument is always B. I have also defined some new operators like + and &.
pvc(A, B) :- somestuff(A, B).
Here the user may type something like pvc((x+y)&(A+b), B).
As a beginner in Prolog, what I want to do is convert the compound A to all lowercase and call somestuff with the new term AN, so the call becomes somestuff((x+y)&(a+b), B).
I tried something like pvc(A, B) :- downcase_atom(A, AN), somestuff(AN, B). but that does not seem to be the correct way. I would appreciate any help.
So, you will need to induct on the structure of your thing, and the easiest way to do this is with the univ operator =../2. First handle your base case, which you did:
downcase_compound(A, B) :- atom(A), downcase_atom(A, B).
Now you will take the structure apart and induct on it using univ:
downcase_compound(A, B) :-
compound(A),
A =.. [Functor|Args],
downcase_compound(Functor, DowncaseFunctor),
maplist(downcase_compound, Args, DowncaseArgs),
B =.. [DowncaseFunctor|DowncaseArgs].
The trick here is simply breaking your compound down into bits you can use, and then recursively calling downcase_compound/2 on those bits. See it in action:
?- downcase_compound((x+y,('A'+b)), X).
X = (x+y, a+b)
Hello, good people of programming.
Logic programming is always fascinating compared to imperative programming.
While pursuing the unknowns of logic programming, I have run into some problems with arithmetic expressions.
Here is the code I have done so far.
number_atom(N) :-
(number(N) -> functor(N, _, _); functor(N, _, _), atom(N)).
arithmeticAdd_expression(V,V,Val,Val).
arithmeticAdd_expression(N, _Var, _Val, N) :-
number_atom(N).
arithmeticAdd_expression(X+Y, Var, Val, R) :-
arithmeticAdd_expression(X, Var, Val, RX),
arithmeticAdd_expression(Y, Var, Val, RY),
(number(RX), number(RY) -> R is RX + RY; R = RX + RY).
Taking the add operation as an example:
arithmeticAdd_expression(Expression, Variable, Value, Result)
?- arithmeticAdd_expression(a+10, a, 1, Result).
Result = 11 ;
Result = a + 10.

?- arithmeticAdd_expression(a+10, b, 1, Result).
Result = a + 10.
What I would like to achieve is that
if the atom(s) in the Expression can be substituted by the given Variable and Value, then the Result is just the number, as in the example shown above (Result = 11). Otherwise, the Result is the Expression itself. My problem is somewhere in that code, but I just couldn't figure it out. Can someone please help me? Thank you.
An important attraction of logic programming over, say, functional programming is that you can often use the same code in multiple directions.
This means that you can ask not only for a particular result if the inputs are given, but also ask how solutions look like in general.
However, for this to work, you have to put some thought into the way you represent your data. For example, in your case, any term in your expression that is still a logical variable may denote either a given number or an atom that should be interpreted differently than a plain number or an addition of two other terms. This is called a defaulty representation because you have to decide what a variable should denote by default, and there is no way to restrict its meaning to only one of the possible cases.
Therefore, I suggest first of all to change the representation so that you can symbolically distinguish the two cases. For example, to represent expressions in your case, let us adopt the convention that:
atoms are denoted by the wrapper a/1
numbers are denoted by the wrapper n/1.
and as is already the case, (+)/2 shall denote addition of two expressions.
So, a defaulty term like b+10 shall now be written as: a(b)+n(10). Note the use of the wrappers a/1 and n/1 to make clear which case we are dealing with. Such a representation is called clean. The wrappers are arbitrarily (though mnemonically) chosen, and we could have used completely different wrappers such as atom/1 and number/1, or atm/1 and nmb/1. The key property is only that we can now symbolically distinguish different cases by virtue of their outermost functor and arity.
Now the key advantage: Using such a convention, we can write for example: a(X)+n(Y). This is a generalization of the earlier term. However, it carries a lot more information than only X+Y, because in the latter case, we have lost track of what these variables stand for, while in the former case, this distinction is still available.
Now, assuming that this convention is used in expressions, it becomes straight-forward to describe the different cases:
expression_result(n(N), _, _, n(N)).
expression_result(a(A), A, N, n(N)).
expression_result(a(A), Var, _, a(A)) :-
dif(A, Var).
expression_result(X+Y, Var, Val, R) :-
expression_result(X, Var, Val, RX),
expression_result(Y, Var, Val, RY),
addition(RX, RY, R).
addition(n(X), n(Y), n(Z)) :- Z #= X + Y.
addition(a(X), Y, a(X)+Y).
addition(X, a(Y), X+a(Y)).
Note that we can now use pattern matching to distinguish the cases. No more if-then-elses, and no more atom/1 or number/1 tests are necessary.
Your test cases work as expected:
?- expression_result(a(a)+n(10), a, 1, Result).
Result = n(11) ;
false.
?- expression_result(a(a)+n(10), b, 1, Result).
Result = a(a)+n(10) ;
false.
And now the key advantage: With such a pure program (please see logical-purity for more information), we can also ask "What do results look like in general?"
?- expression_result(Expr, Var, N, R).
Expr = R, R = n(_1174) ;
Expr = a(Var),
R = n(N) ;
Expr = R, R = a(_1698),
dif(_1698, Var) ;
Expr = n(_1852)+n(_1856),
R = n(_1896),
_1852+_1856#=_1896 ;
Expr = n(_2090)+a(Var),
R = n(_2134),
_2090+N#=_2134 .
Here, I have used logical variables for all arguments, and I get quite general answers from this program. This is why I have used clpfd constraints for declarative integer arithmetic.
Thus, your immediate issue can be readily solved by using a clean representation, and using the code above.
Only one very small challenge remains: Maybe you actually want to use a defaulty representation such as c+10 (instead of a(c)+n(10)). The task you are then facing is to convert the defaulty representation to a clean one, for example via a predicate defaulty_clean/2. I leave this as an easy exercise. Once you have a clean representation, you can use the code above without changes.
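For concreteness, one possible sketch of such a conversion, assuming the defaulty term is already sufficiently instantiated (the clause structure simply mirrors the wrappers introduced above):

defaulty_clean(D, n(D)) :- number(D).
defaulty_clean(D, a(D)) :- atom(D).
defaulty_clean(X+Y, CX+CY) :-
    defaulty_clean(X, CX),
    defaulty_clean(Y, CY).

For example, defaulty_clean(c+10, C) yields C = a(c)+n(10), which can then be passed to expression_result/4 unchanged.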
min_member(-Min, +List)
True when Min is the smallest member in the standard order of terms. Fails if List is empty.
?- min_member(3, [1,2,X]).
X = 3.
The explanation is of course that variables come before all other terms in the standard order of terms, and unification is used. However, the reported solution feels somehow wrong.
How can it be justified? How should I interpret this solution?
EDIT:
One way to prevent min_member/2 from succeeding with this solution is to change the standard library (SWI-Prolog) implementation as follows:
xmin_member(Min, [H|T]) :-
xmin_member_(T, H, Min).
xmin_member_([], Min0, Min) :-
( var(Min0), nonvar(Min)
-> fail
; Min = Min0
).
xmin_member_([H|T], Min0, Min) :-
    (   H @>= Min0
    ->  xmin_member_(T, Min0, Min)
    ;   xmin_member_(T, H, Min)
    ).
The rationale behind failing instead of throwing an instantiation error (what #mat suggests in his answer, if I understood correctly) is that this is a clear question:
"Is 3 the minimum member of [1,2,X], when X is a free variable?"
and the answer to this is (to me at least) a clear "No", rather than "I can't really tell."
This is the same class of behavior as sort/2:
?- sort([A,B,C], [3,1,2]).
A = 3,
B = 1,
C = 2.
And the same tricks apply:
?- min_member(3, [1,2,A,B]).
A = 3.
?- var(B), min_member(3, [1,2,A,B]).
B = 3.
The actual source of confusion is a common problem with general Prolog code. There is no clean, generally accepted classification of the kind of purity or impurity of a Prolog predicate. In a manual, and similarly in the standard, pure and impure built-ins are happily mixed together. For this reason, things are often confused, and talking about what should be the case and what should not often leads to unfruitful discussions.
How can it be justified? How should I interpret this solution?
First, look at the "mode declaration" or "mode indicator":
min_member(-Min, +List)
In the SWI documentation, this describes the way how a programmer shall use this predicate. Thus, the first argument should be uninstantiated (and probably also unaliased within the goal), the second argument should be instantiated to a list of some sort. For all other uses you are on your own. The system assumes that you are able to check that for yourself. Are you really able to do so? I, for my part, have quite some difficulties with this. ISO has a different system which also originates in DEC10.
Further, the implementation tries to be "reasonable" for unspecified cases. In particular, it tries to be steadfast in the first argument. So the minimum is first computed independently of the value of Min. Then, the resulting value is unified with Min. This robustness against misuse often comes at a price. In this case, min_member/2 always has to visit the entire list, no matter whether this is useful or not. Consider
?- length(L, 1000000), maplist(=(1),L), min_member(2, L).
Clearly, 2 is not the minimum of L. This could be detected by considering the first element of the list only. Due to the generality of the definition, the entire list has to be visited.
Output unification is handled similarly in the standard. You can spot those cases when the (otherwise) declarative description (which comes first in a built-in's specification) explicitly refers to unification, like
8.5.4 copy_term/2
8.5.4.1 Description
copy_term(Term_1, Term_2) is true iff Term_2 unifies
with a term T which is a renamed copy (7.1.6.2) of
Term_1.
or
8.4.3 sort/2
8.4.3.1 Description
sort(List, Sorted) is true iff Sorted unifies with
the sorted list of List (7.1.6.5).
Here are those arguments (in brackets) of built-ins that can only be understood as being output arguments. Note that there are many more which effectively are output arguments, but that do not need the process of unification after some operation. Think of 8.5.2 arg/3 (3) or 8.2.1 (=)/2 (2) or (1).
8.5.4 copy_term/2 (2),
8.4.2 compare/3 (1),
8.4.3 sort/2 (2),
8.4.4 keysort/2 (2),
8.10.1 findall/3 (3),
8.10.2 bagof/3 (3),
8.10.3 setof/3 (3).
So much for your direct questions; there are some more fundamental problems behind them:
Term order
Historically, "standard" term order1 was defined to permit the definition of setof/3 and sort/2 around 1982. (Prior to that, as of 1978, it was not mentioned in the DEC10 manual user's guide.)
From 1982 on, term order was frequently (erm, ab-) used to implement other orders, particularly because DEC10 did not offer higher-order predicates directly. call/N was only invented two years later (1984), but needed some more decades to be generally accepted. It is for this reason that Prolog programmers have a somewhat nonchalant attitude towards sorting. Often they intend to sort terms of a certain kind, but prefer to use sort/2 for this purpose, without any additional error checking. A further reason for this was the excellent performance of sort/2, beating various "efficient" libraries in other programming languages decades later (I believe STL had a bug to this end, too). Also, the complete magic in the code (I remember one variable was named Omniumgatherum) did not invite copying and modifying it.
Term order has two problems: variables (which can be further instantiated to invalidate the current ordering) and infinite terms. Both are handled in current implementations without producing an error, but with still undefined results. Yet, programmers assume that everything will work out. Ideally, there would be comparison predicates that produce
instantiation errors for unclear cases like this suggestion. And another error for incomparable infinite terms.
Both SICStus and SWI have min_member/2, but only SICStus has min_member/3 with an additional argument to specify the order employed. So the goal
?- min_member(=<, M, Ms).
behaves more to your expectations, but only for numbers (plus arithmetic expressions).
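In systems lacking min_member/3, a comparable predicate is easy to sketch yourself; the following is only an illustration (names are mine, and this is not the SICStus implementation), built on foldl/4:

% min_member_by(+Pred_2, -Min, +List): Min is a smallest element of List
% according to the binary relation Pred_2, e.g. (=<)/2 for numbers.
min_member_by(Pred_2, Min, [H|T]) :-
    foldl(smaller_of(Pred_2), T, H, Min).

smaller_of(Pred_2, X, Min0, Min) :-
    (   call(Pred_2, X, Min0)
    ->  Min = X
    ;   Min = Min0
    ).

A goal like min_member_by(=<, M, [3,1,2]) yields M = 1; with (=<)/2 it is again restricted to numbers, and it throws an instantiation error instead of guessing as soon as it has to compare an unbound variable.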
Footnotes:
1 I quote standard, in standard term order, for this notion existed since about 1982 whereas the standard was published 1995.
Clearly min_member/2 is not a true relation:
?- min_member(X, [X,0]), X = 1.
X = 1.
yet, after simply exchanging the two goals by (highly desirable) commutativity of conjunction, we get:
?- X = 1, min_member(X, [X,0]).
false.
This is clearly quite bad, as you correctly observe.
Constraints are a declarative solution for such problems. In the case of integers, finite domain constraints are a completely declarative solution for such problems.
Without constraints, it is best to throw an instantiation error when we know too little to give a sound answer.
This is a common property of many (all?) predicates that depend on the standard order of terms, while the order between two terms can change after unification. Baseline is the conjunction below, which cannot be reverted either:
?- X @< 2, X = 3.
X = 3.
Most predicates using a -Value annotation for an argument say that pred(Value) is the same
as pred(Var), Value = Var. Here is another example:
?- sort([2,X], [3,2]).
X = 3.
These predicates only represent clean relations if the input is ground. It is too much to demand the input to be ground though because they can be meaningfully used with variables, as long as the user is aware that s/he should not further instantiate any of the ordered terms. In that sense, I disagree with #mat. I do agree that constraints can surely make some of these relations sound.
This is how min_member/2 is implemented:
min_member(Min, [H|T]) :-
min_member_(T, H, Min).
min_member_([], Min, Min).
min_member_([H|T], Min0, Min) :-
    (   H @>= Min0
    ->  min_member_(T, Min0, Min)
    ;   min_member_(T, H, Min)
    ).
So it seems that min_member/2 actually tries to unify Min (the first argument) with the smallest element in List in the standard order of terms.
I hope I am not off-topic with this third answer. I did not edit one of the previous two as I think it's a totally different idea. I was wondering if this undesired behaviour:
?- min_member(X, [A, B]), A = 3, B = 2.
X = A, A = 3,
B = 2.
can be avoided if some conditions are postponed until the moment when A and B get instantiated.
promise_relation(Rel_2, X, Y):-
call(Rel_2, X, Y),
when(ground(X), call(Rel_2, X, Y)),
when(ground(Y), call(Rel_2, X, Y)).
min_member_1(Min, Lst):-
member(Min, Lst),
    maplist(promise_relation(@=<, Min), Lst).
What I want from min_member_1(?Min, ?Lst) is for it to express a relation that says Min will always be lower (in the standard order of terms) than any of the elements in Lst.
?- min_member_1(X, L), L = [_,2,3,4], X = 1.
X = 1,
L = [1, 2, 3, 4] .
If variables get instantiated at a later time, the order in which they get bound becomes important as a comparison between a free variable and an instantiated one might be made.
?- min_member_1(X, [A,B,C]), B is 3, C is 4, A is 1.
X = A, A = 1,
B = 3,
C = 4 ;
false.
?- min_member_1(X, [A,B,C]), A is 1, B is 3, C is 4.
false.
But this can be avoided by unifying all of them in the same goal:
?- min_member_1(X, [A,B,C]), [A, B, C] = [1, 3, 4].
X = A, A = 1,
B = 3,
C = 4 ;
false.
Versions
If the comparisons are intended only for instantiated variables, promise_relation/3 can be changed to check the relation only when both variables get instantiated:
promise_relation(Rel_2, X, Y):-
when((ground(X), ground(Y)), call(Rel_2, X, Y)).
A simple test:
?- L = [_, _, _, _], min_member_1(X, L), L = [3,4,1,2].
L = [3, 4, 1, 2],
X = 1 ;
false.
Note: edits were made to improve the initial post, thanks to false's comments and suggestions.
I have an observation regarding your xmin_member implementation. It fails on this query:
?- xmin_member(1, [X, 2, 3]).
false.
I tried to include the case when the list might include free variables. So, I came up with this:
ymin_member(Min, Lst):-
member(Min, Lst),
    maplist(@=<(Min), Lst).
Of course it's worse in terms of efficiency, but it works on that case:
?- ymin_member(1, [X, 2, 3]).
X = 1 ;
false.
?- ymin_member(X, [X, 2, 3]).
true ;
X = 2 ;
false.
I was wondering about a Prolog that could include a built-in call like this:
accum(generator, filter, accumulator)
Calculates all solutions to generator.
For each one, if filter can be proved, accumulator is proved.
Backtracks to find all solutions to filter and generator.
Accumulator may backtrack internally, but multiple proofs of accumulator are
conjoined, not backtracked.
So, for example, to sum a list without using recursion you could write:
X is 0, accum(member(Val,List), True, X is X + Val).
Is there any Prolog with this construct or an equivalent? Bear in mind that I am a bit of a newbie at Prolog and may be missing something obvious.
SWI-Prolog library(aggregate) has a powerful interface, for instance
aggregate_all(sum(Val), member(Val,List), Sum)
The (apparently simple) sharing of variables between aggregation and generation is obtained with a predicate, foreach/2, that could interest you.
In SWI-Prolog you can do ?- edit(library(aggregate)). to study the internals...
library(aggregate) is relatively inefficient, but coupled with SWI-Prolog's nb_* (non-backtrackable) data structures it should do its job very well...
About non-backtrackable data structures: here is an example of my 'self-built' accumulator, implemented by means of nb_setarg/3.
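A minimal sketch of that kind of accumulator might look like this (an illustration with assumed names, not necessarily the referenced code): a non-backtrackable running sum updated destructively with nb_setarg/3 inside a failure-driven loop.

% sum_list_nb(+List, -Sum): sum a list of numbers. The running total lives
% in the compound state(_) and is updated with nb_setarg/3, so it survives
% the backtracking forced by fail.
sum_list_nb(List, Sum) :-
    State = state(0),
    (   member(X, List),
        arg(1, State, S0),
        S is S0 + X,
        nb_setarg(1, State, S),
        fail
    ;   arg(1, State, Sum)
    ).

For instance, sum_list_nb([1,2,3], S) gives S = 6.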
I assume you mean without explicit recursion? If so, you can use an implementation of the higher-order list predicate fold left, together with a lambda expression to avoid the need for an auxiliary predicate. Using Logtalk as an example, you can write:
?- Sum0 is 0, meta::fold_left([X,Y,Z]>>(Z is Y+X), Sum0, [1,2,3], Sum).
Sum0 = 0,
Sum = 6.
Logtalk can use most Prolog implementations as a backend compiler (http://logtalk.org/). You can also use Ulrich's lambda library (http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/ISO-Hiord.html) with a supported Prolog compiler, together with a Prolog library providing the fold left predicate, for the same result. Using now YAP as an example:
$ yap
...
?- use_module(library(lambda)).
...
?- use_module(library(maplist)).
...
?- Sum0 is 0, foldl(\X^Y^Z^(Z is Y+X), [1,2,3], Sum0, Sum).
Sum = 6,
Sum0 = 0.
Briefly, the fold left predicate iterates over a list, recursively applying the closure in its first argument to a list element and the accumulator, returning the final accumulator value.
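For reference, the fold-left predicate itself is only a few lines; here is a sketch equivalent to the foldl/4 of SWI-Prolog's library(apply), written under a different name to avoid clashing with the library version:

% fold_left(:Goal_3, ?List, +Acc0, -Acc): call Goal_3(Elem, AccIn, AccOut)
% for each element of List, threading the accumulator left to right.
fold_left(_, [], Acc, Acc).
fold_left(Goal, [X|Xs], Acc0, Acc) :-
    call(Goal, X, Acc0, Acc1),
    fold_left(Goal, Xs, Acc1, Acc).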
In Mercury's standard library the "solutions" module provides functionality like this.
Note that X is X + Val does not assign a new value to X. It is a statement that is true if Val is zero, and false if it is any other number, which is probably not what you mean. Accumulators like this are typically expressed as a relation between the initial and final value.
In Mercury, your example could be written as:
:- import_module solutions.
...
sumlist(List, Sum) :-
Generator = (pred(Val::out) is nondet :- member(Val, List), true),
Accumulator = (pred(X::in, Y::in, Z::out) is det :- Z = X + Y),
aggregate(Generator, Accumulator, 0, Sum).
There is no need for a separate filter argument, as it can be included as part of the generator.
In SWI-Prolog, I have a list whose elements are pairs of the form Key-ValuesList. For instance, one such list may look like:
[1-[a,b],2-[],3-[c]]
I would like to transform this list into a list of lists of pairs of the form Key-[Value], where Value is an element of ValuesList. The above example would be transformed into:
[[1-[a],2-[],3-[c]], [1-[b],2-[],3-[c]]]
My current solution is the following:
% all_pairs_lists(+InputList, -OutputLists).
all_pairs_lists([], [[]]).
all_pairs_lists([Key-[]|Values], CP) :-
!,
findall([Key-[]|R], (all_pairs_lists(Values,RCP), member(R,RCP)), CP).
all_pairs_lists([Key-Value|Values], CP) :-
findall([Key-[V]|R], (all_pairs_lists(Values,RCP), member(V,Value), member(R,RCP)), CP).
Using this predicate, a call of the form
all_pairs_lists([1-[a,b],2-[],3-[c]],OutputLists).
binds the variable OutputLists to the desired result mentioned above. While it appears correct, this implementation causes an "Out of global stack" error when InputList has very long lists as values.
Is there a less stack consuming approach to doing this? It would seem like quite a common operation for this type of data structure.
Well, to sum it up, you're doing it wrong.
In Prolog, when we want to express a relation instead of a function (several results possible instead of one), we don't use findall/3 and member/2 directly. We rather state what the relation is, and then, once that is done, we use findall/3 if we need a list of results.
Here what it means is that we want to express the following relation:
Take a list of Key-Values and return a list of Key-[Value] where Value is a member of the Values list.
We could do so as follows:
% The base case: handle the empty list
a_pair_list([], []).
% The case where the Values list is empty, then the resulting [Value] is []
a_pair_list([Key-[]|List], [Key-[]|Result]) :-
a_pair_list(List, Result).
% The case where the Values list is not empty, then Value is a member of Values.
a_pair_list([Key-[Not|Empty]|List], [Key-[Value]|Result]) :-
member(Value, [Not|Empty]),
a_pair_list(List, Result).
Once this relation is expressed, we can already obtain all the info we wish:
?- a_pair_list([1-[a, b], 2-[], 3-[c]], Result).
Result = [1-[a], 2-[], 3-[c]] ;
Result = [1-[b], 2-[], 3-[c]] ;
false.
The desired list is now just a fairly straight-forward findall/3 call away:
all_pairs_lists(Input, Output) :-
findall(Result, a_pair_list(Input, Result), Output).
The important thing to remember is that it's way better to stay away from extra-logical stuff: !/0, findall/3, etc., because it often leads to less general programs and/or less correct ones. Here, since we can express the relation stated above in a pure and clean way, we should. This way we can limit the annoying use of findall/3 to the strict minimum.
As #Mog already explained clearly what the problem could be, here is a version (ab)using the basic 'functional' builtins for list handling:
all_pairs_lists(I, O) :-
findall(U, maplist(pairs_lists, I, U), O).
pairs_lists(K-[], K-[]) :- !.
pairs_lists(K-L, K-[R]) :- member(R, L).
test:
?- all_pairs_lists([1-[a,b],2-[],3-[c]],OutputLists).
OutputLists = [[1-[a], 2-[], 3-[c]], [1-[b], 2-[], 3-[c]]].