How to use member predicate to specify constraints in Prolog

I'm trying to write a Prolog program which does the following:
I have some relations defined in the Relations list. (For example: [f1,s1] means f1 needs s1.) Depending on which features (f1, f2, f3) are selected in the TargetFeat list, I would like to create the Result list using constraint programming.
Here is a sample code:
Relations =[[f1, s1], [f2, s2], [f3, s3], [f3, s4]],
TargetFeat = [f3, f1],
Result = [],
member(f3,TargetFeat) #= member(s3,Result), %One of the constraints
labeling(Result).
This doesn't work because #= works only with arithmetic expressions as operands. What are the alternatives to achieve something like this?

There are many possible ways to model such dependencies with constraints. I consider in this post CLP(FD) and CLP(B) constraints, because they are most commonly used for solving combinatorial tasks.
Consider first CLP(FD), which is more frequently used and more convenient in many ways. When using CLP(FD) constraints, you again have several options to represent your task. However, no matter which model you eventually choose, you must first translate all items in your representation to suitable entities that the constraint solver can actually reason about. In the case of CLP(FD), this means translating your entities to integers.
Translating your entities to corresponding integers is very straightforward, and it is one of the reasons why CLP(FD) constraints also suffice to model tasks over domains that do not actually contain integers, but can be mapped to integers. So, let us suppose you are not reasoning about features f1, f2 and f3, but about the integers 0, 1, and 2, or any other set of integers that suits you.
You can directly translate your requirements to this new domain. For example, instead of:
[f1,s1] means: f1 needs s1
we can say for example:
0 -> 3 means: 0 needs 3
And this brings us already very close to CLP(FD) constraints that let us model the whole problem. We only need to make one more mental leap to obtain a representation that lets us model all requirements. Instead of concrete integers, we now use CLP(FD) variables to indicate whether or not a specific requirement must be met to obtain the desired features. We shall use the variables R1, R2, R3, ... to denote which requirements are needed, by using either 0 (not needed) or 1 (needed) for each of the possible requirements.
At this point, you must develop a clear mental model of what you actually want to describe. I explain what I have in mind: I want to describe a relation between three things:
a list Fs of features
a list Ds of dependencies between features and requirements
a list Rs of requirements
We have already considered how to represent all these entities: (1) is a list of integers that represent the features we want to obtain. (2) is a list of F -> R pairs that mean "feature F needs requirement R", and (3) is a list of Boolean variables that indicate whether or not each requirement is eventually needed.
Now let us try to relate all these entities to one another.
First things first: If no features are desired, it all is trivial:
features_dependencies_requirements([], _, _).
But what if a feature is actually desired? Well, it's simple: We only need to take into account the dependencies of that feature:
features_dependencies_requirements([F|Fs], Ds, Rs) :-
    member(F->R, Ds),
so we have in R the requirement of feature F. Now we only need to find the suitable variable in Rs that denotes requirement R. But how do we find the right variable? After all, a Prolog variable "does not have a bow tie", or—to foreigners—lacks a mark by which we could distinguish it from others. So, at this point, we would actually find it convenient to be able to nicely pick a variable out of Rs given the name of its requirement. Let us hence suppose that we represent Rs as a list of pairs of the form I=R, where I is the integer that defines the requirement, and R is the Boolean indicator that denotes whether that requirement is needed. Given this representation, we can define the clause above in its entirety as follows:
features_dependencies_requirements([F|Fs], Ds, Rs) :-
    member(F->I, Ds),
    member(I=1, Rs),
    features_dependencies_requirements(Fs, Ds, Rs).
That's it. This fully relates a list of features, dependencies and requirements in such a way that the third argument indicates which requirements are necessary to obtain the features.
At this point, the attentive reader will see that no CLP(FD) constraints whatsoever were actually used in the code above, and in fact the translation of features to integers was completely unnecessary. We can as well use atoms to denote features and requirements, using the exact same code shown above.
Sample query and answers:
?- features_dependencies_requirements([f3,f1],
       [f1->s1,f2->s2,f3->s3,f3->s4],
       [s1=S1,s2=S2,s3=S3,s4=S4]).
S1 = S3, S3 = 1 ;
S1 = S4, S4 = 1 ;
false.
Obviously, I have made the following assumption: the dependencies are disjunctive, which means that a feature can be implemented if at least one of its requirements is satisfied. If you want to turn this into a conjunction, you will obviously have to change this. You can start by representing dependencies as F -> [R1,R2,...,Rn].
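For illustration, a hedged sketch of such a conjunctive variant could look like this (the predicate and helper names are made up; it reuses the I=R representation of the requirements from above):

features_dependencies_requirements_conj([], _, _).
features_dependencies_requirements_conj([F|Fs], Ds, Rs) :-
    member(F->Needed, Ds),                   % Needed is now a *list* of requirements
    maplist(requirement_needed(Rs), Needed),
    features_dependencies_requirements_conj(Fs, Ds, Rs).

requirement_needed(Rs, R) :-
    member(R=1, Rs).

With dependencies such as [f1->[s1], f3->[s3,s4]], asking for feature f3 then forces both S3 and S4 to 1.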
Other than that, can it still be useful to translate your entities to integers? Yes, because many of your constraints can likely also be formulated with CLP(FD) constraints, and you need integers for this to work.
To get you started, here are two ways that may be usable in your case:
use constraint reification to express what implies what. For example: F #==> R.
use global constraints like table/2 that express relations.
Particularly in the first case, CLP(B) constraints may also be useful. You can always use Boolean variables to express whether a requirement must be met.
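To make the reification idea concrete, here is a minimal CLP(FD) sketch in SWI-Prolog syntax (the predicate and variable names are made up for illustration; F1..F3 are Booleans saying whether a feature is selected, S1..S4 whether a requirement is needed):

:- use_module(library(clpfd)).

features_requirements([F1,F2,F3], [S1,S2,S3,S4]) :-
    [F1,F2,F3,S1,S2,S3,S4] ins 0..1,
    F1 #= 1 #==> S1 #= 1,                   % f1 needs s1
    F2 #= 1 #==> S2 #= 1,                   % f2 needs s2
    F3 #= 1 #==> (S3 #= 1 #\/ S4 #= 1).     % f3 needs s3 or s4 (disjunctive reading)

A query such as ?- features_requirements([1,0,1], Rs), label(Rs). then enumerates the admissible requirement vectors; an analogous model can be written with CLP(B), using sat/1 over Boolean variables.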

Not a solution but some observations that would not fit a comment.
Don't use lists to represent relations. For example, instead of [f1, s1], write requires(f1, s1). If these requirements are fixed, then define requires/2 as a predicate. If you need to identify or enumerate features, consider a feature/1 predicate. For example:
feature(f1).
feature(f2).
...
Same for s1, s2, ... E.g.
support(s1).
support(s2).
...
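Following this advice, the fixed relations from the question could be written as facts, for example:

requires(f1, s1).
requires(f2, s2).
requires(f3, s3).
requires(f3, s4).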

Related

How to iterate through structure?

If I have a list like: [atm(abd,bubu,ha), atm(aei),atm(xyz,huhu), atm(aabb,a,e,x)], how could I 'iterate' through the elements of one of the atm structures?
For example, for atm(abd, bubu, ha), I would like to write abd, bubu and ha.
The problem is that the structures have variable length.
Is there a way to transform the structure into a list? Thanks.
Using (=..)/2
@TopologicalSort has already given a nice answer, using (=..)/2 to convert a term to a list of functor and arguments.
This obviously solves the immediate problem very generally.
However, it comes with its own drawbacks too: First and most importantly, (=..)/2 is not a general relation. For example, we have:
?- X =.. Y.
ERROR: Arguments are not sufficiently instantiated
This means that we cannot use this construct to generate solutions. It works only if its arguments are sufficiently instantiated.
Second, using (=..)/2 also comes with the time and memory overhead of constructing and representing a list in addition to the term that is already there in a different form. (And, mutatis mutandis, in the other direction too of course.)
Thus, it may be worth asking: Are there different ways to solve this task? Are they better suited?
Alternative 1: Doing it manually
How do I convert thee? Let me count the ways.
From the example you cite, we must be able to handle—in order of their appearance—terms of the following forms:
atm/3
atm/1
atm/2
atm/4
The point here is that the number of shown cases is finite, and so we can easily handle them all like this:
atm_list(atm(A), [A]).
atm_list(atm(A,B), [A,B]).
atm_list(atm(A,B,C), [A,B,C]).
atm_list(atm(A,B,C,D), [A,B,C,D]).
To convert a list of such terms, you can use maplist/3:
?- Ls = [atm(abd,bubu,ha), atm(aei),atm(xyz,huhu), atm(aabb,a,e,x)],
   maplist(atm_list, Ls, Lists).
Ls = [atm(abd, bubu, ha), atm(aei), atm(xyz, huhu), atm(aabb, a, e, x)],
Lists = [[abd, bubu, ha], [aei], [xyz, huhu], [aabb, a, e, x]].
A major advantage is that this relation is very general and can also be used to generate answers:
?- atm_list(A, Ls).
A = atm(_27464, _27466, _27468),
Ls = [_27464, _27466, _27468] ;
A = atm(_27464),
Ls = [_27464] ;
A = atm(_27464, _27466),
Ls = [_27464, _27466] ;
A = atm(_27464, _27466, _27468, _27470),
Ls = [_27464, _27466, _27468, _27470].
This is also more efficient than using (=..)/2. Clearly, it can only be done if the number of arising cases is finite. (Exercise: Write a Prolog program that generates clauses for all integers 1..N).
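As a hedged sketch for that exercise (one possible approach; the predicate name is made up, and (=..)/2 is used only at clause-generation time, so the generated clauses themselves remain free of it):

gen_atm_list_clauses(N) :-
    forall(between(1, N, Arity),
           (   length(Args, Arity),
               Term =.. [atm|Args],
               portray_clause(atm_list(Term, Args))
           )).

For instance, ?- gen_atm_list_clauses(4). prints the four clauses shown above.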
Alternative 2: Using lists
There are several well-known criteria for judging whether lists are an appropriate data structure. For example:
Does the empty list make sense in your use case?
Are there sensible cases for all possible lengths?
etc.
Only you can answer this question for your particular use case, so I only show what it could look like: Suppose you represent your whole initial list as follows:
[[abd,bubu,ha],[aei],[xyz,huhu],[aab,a,e,x]]
Then the whole issue does not even arise, because the elements are already specified as lists. Thus, there is no more need to convert anything.
Sure.
If First is atm(abd,bubu,ha) (for example), this code will split it into a list you can go through.
First =.. List.
Then, List will be [atm, abd, bubu, ha].
I don't know if this works in your particular version of Prolog. I'm using SWI-Prolog. If not, maybe your version has a similar predicate.
For more information, see http://www.swi-prolog.org/pldoc/doc_for?object=(%3D..)/2 .
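To actually "go through" the arguments and write them, as asked in the question, a small sketch (SWI-Prolog) could drop the functor from the resulting list and iterate over the rest:

?- First = atm(abd,bubu,ha),
   First =.. [_Functor|Args],
   forall(member(A, Args), writeln(A)).

This writes abd, bubu and ha, one per line.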

Prolog manual or custom labeling

I am currently writing a solver for a floor planning problem in Prolog and have some issues with the labeling part.
The current problem is that my constraints are posted, but when I launch the labeling, it takes forever to find a solution. I would like to bring in some heuristics.
My question is: how do I manually label my variables? I am afraid that after defining a clpfd variable like this:
X in Xinf..Xsup
and constraining it, if I do something like:
fd_sup(X, Xmax),
X = Xmax,
...
in my custom labeling, I won't be using Prolog's backtracking to test the other values of X's domain. Am I wrong?
Also, is there a smarter way to label my variables than writing custom labeling procedures? My idea of heuristics would be to alternately try the extrema of a variable's domain (like max(X), min(X), max(X-1), min(X-1), etc.)
Hope you can help me :)
It is not difficult to write a custom labeling procedure, and with most real problems you will eventually need one anyway in order to incorporate problem-specific heuristics.
The two main components of a labeling procedure are
variable selection: from all the remaining (i.e. not yet instantiated) problem variables, pick one to consider next.
value selection or branching: explore, via backtracking, two or more alternative sub-problems by reducing the chosen variable's domain in (usually) complementary ways.
Using this scheme, the default labeling procedure can be written as
label(Xs) :-
    ( select_variable(X, Xs, Xs1) ->
        branch(X),
        label(Xs1)
    ;
        true    % done, no variables left
    ).
select_variable(X, [X|Xs], Xs). % 'leftmost' strategy
branch(X) :- indomain(X).
You can now redefine select_variable/3 to implement techniques such as "first-fail", and redefine branch/1 to try domain values in different orders. As long as you make sure that branch/1 enumerates all of X's domain values on backtracking, your search remains complete.
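For example, a hedged sketch of a "first-fail" variable selection that plugs into the label/1 scheme above could look like this, assuming SWI-Prolog's fd_size/2 as the domain-size reflection predicate (other systems provide similar predicates) and assuming all variables in the list have finite domains:

select_variable(X, Xs, Rest) :-         % pick the variable with the smallest domain
    Xs = [X0|Xs0],
    foldl(smaller_domain, Xs0, X0, X),
    eq_delete(Xs, X, Rest).

smaller_domain(Y, Best0, Best) :-
    fd_size(Y, SY),
    fd_size(Best0, S0),
    (   SY < S0 -> Best = Y
    ;   Best = Best0
    ).

% remove the chosen variable by identity (==), not by unification
eq_delete([], _, []).
eq_delete([Y|Ys], X, Rest) :-
    (   Y == X -> Rest = Ys
    ;   Rest = [Y|Rest1],
        eq_delete(Ys, X, Rest1)
    ).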
Sometimes you want to try just one domain value first (say, one suggested by a heuristic), but, if it turns out to be no good, you do not want to commit to another concrete value immediately.
Let's say that, as in your example, you want to try the maximum domain value first. You could write this as
branch(X) :-
    fd_sup(X, Xmax),
    (
        X = Xmax        % try the maximum
    ;
        X #\= Xmax      % otherwise exclude the maximum
    ).
Because the two cases are complementary and cover all possible values for X, your search is still complete. However, because of the second alternative, branch/1 can now succeed with an uninstantiated X, which means you must make sure in the labeling procedure that you don't lose this variable from your list. One possibility would be:
label(Xs) :-
    ( select_variable(X, Xs, Xs1) ->
        branch(X),
        ( var(X) -> append(Xs1, [X], Xs2) ; Xs2 = Xs1 ),
        label(Xs2)
    ;
        true    % done, no variables left
    ).
First, always try built-in heuristics. ff is often a good strategy.
For custom labeling strategies, it is often easiest to first convert the domain to a list, then reorder the list, and then simply use member/2 to assign the values of the domain using the new order.
A good building block is dom_integers/2, relating a finite CLP(FD) domain to a list of integers:
:- use_module(library(clpfd)).
dom_integers(D, Is) :- phrase(dom_integers_(D), Is).
dom_integers_(I) --> { integer(I) }, [I].
dom_integers_(L..U) --> { numlist(L, U, Is) }, Is.
dom_integers_(D1\/D2) --> dom_integers_(D1), dom_integers_(D2).
Your specific strategy is easily expressed on a list of such ordered integers, relating these integers to a second list where the values occur in the order you describe:
outside_in([]) --> [].
outside_in([I]) --> [I].
outside_in([First|Rest0]) --> [First,Last],
    { append(Rest, [Last], Rest0) },
    outside_in(Rest).
Sample query and result:
?- phrase(outside_in([1,2,3,4]), Is).
Is = [1, 4, 2, 3] ;
false.
Combining this with fd_dom/2 and dom_integers/2, we get (bindings for variables other than X omitted):
?- X in 10..20,
   fd_dom(X, Dom),
   dom_integers(Dom, Is0),
   phrase(outside_in(Is0), Is),
   member(X, Is).
X = 10 ;
X = 20 ;
X = 11 ;
X = 19 ;
X = 12 ;
X = 18 ;
etc.
Nondeterminism is preserved by member/2.
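Putting these pieces together, a complete custom labeling predicate for a list of variables could look like the following hedged sketch (the name label_outside_in/1 is made up; it reuses fd_dom/2, dom_integers/2 and outside_in//1 from above):

label_outside_in([]).
label_outside_in([X|Xs]) :-
    (   integer(X) -> true              % already fixed by propagation
    ;   fd_dom(X, Dom),
        dom_integers(Dom, Is0),
        phrase(outside_in(Is0), Is),
        member(X, Is)                   % nondeterministically try values, outside in
    ),
    label_outside_in(Xs).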
Make sure to distinguish labeling strategies from additional propagation. These two aspects are currently a bit mixed in your question.
In SWI-Prolog, there is a predicate called clpfd:contracting/1. It does what you describe: It tries values from the domain boundaries, and removes values that can be seen as inconsistent, i.e., for which it is known that no solution exists.
Therefore, if you have a list of variables Vs, you can try: clpfd:contracting(Vs), and see if this helps.
Note that this can also significantly slow down the search, though on the other hand, also help significantly to reduce the search space before even trying any labeling!
To complement the other answers (one contrasting labeling and propagation, one showing a dedicated labeling method), I now tackle a further very important aspect of this question:
Very often, when beginners complain about the speed of their code, it turns out that their code in fact doesn't even terminate! More efficiency would not help in that case.
Hence, this answer points you towards first ensuring actual termination of your relation.
The best way to ensure termination of CLP(FD) programs is to separate them into 2 parts:
the first, called the core relation, simply posts all constraints.
the second uses labeling/2 to perform the actual search.
Have you done this in your program? If not, please do. When this is done, make sure that the core relation, say solution/2 (where the arguments are: a term denoting the task instance, and the list of variables to be labeled) terminates universally by querying:
?- solution(Instance, Vs), false.
If this terminates, then the following also terminates:
?- solution(Instance, Vs), label(Vs), false.
Of course, in larger tasks, you have no chance to actually witness the termination of the latter query, but a good chance to witness the termination of the first query, because setting up the constraints is often much faster than actually obtaining even a single solution.
Therefore, test whether your core relation terminates!
This follows up on this previous answer by #mat.
If you have got some more CPU cycles to burn, try shave_zs/1 as defined in this previous answer.
shave_zs/1 kind of works like the auxiliary library predicate clpfd:contracting/1. Unlike contracting/1, however, all values are "up for grabs"—not just the ones at the boundary. YMMV!

Prolog CLP order of operations?

Hi, hopefully someone can help me. I was just wondering whether my code below is sufficient for setting up a 12 x 12 matrix and, assuming 'constrain(M)' calls all the correct constraints (which are defined in rules lower down), labelling each of the rows? It's failing at the moment, and I've traced my constraints so I know they all work, but I don't know whether it fails because I'm calling them outside of the main predicate.
matrix(M) :-
    M = [R1,R2,R3,R4,R5,R6,R7,R8,R9,R10,R11,R12],
    R1 = [A,B,C,D,E,F,G,H,I,J,K,L],
    R2 = [A2,B2,C2,D2,E2,F2,G2,H2,I2,J2,K2,L2],
    R3 = [A3,B3,C3,D3,E3,F3,G3,H3,I3,J3,K3,L3],
    R4 = [A4,B4,C4,D4,E4,F4,G4,H4,I4,J4,K4,L4],
    R5 = [A5,B5,C5,D5,E5,F5,G5,H5,I5,J5,K5,L5],
    R6 = [A6,B6,C6,D6,E6,F6,G6,H6,I6,J6,K6,L6],
    R7 = [A7,B7,C7,D7,E7,F7,G7,H7,I7,J7,K7,L7],
    R8 = [A8,B8,C8,D8,E8,F8,G8,H8,I8,J8,K8,L8],
    R9 = [A9,B9,C9,D9,E9,F9,G9,H9,I9,J9,K9,L9],
    R10 = [A10,B10,C10,D10,E10,F10,G10,H10,I10,J10,K10,L10],
    R11 = [A11,B11,C11,D11,E11,F11,G11,H11,I11,J11,K11,L11],
    R12 = [A12,B12,C12,D12,E12,F12,G12,H12,I12,J12,K12,L12],
    constrain(M),
    labeling([], R1),
    labeling([], R2),
    labeling([], R3),
    labeling([], R4),
    labeling([], R5),
    labeling([], R6),
    labeling([], R7),
    labeling([], R8),
    labeling([], R9),
    labeling([], R10),
    labeling([], R11),
    labeling([], R12).
You should always separate the constraint posting from the actual search (labeling/2).
The reason is clear: It can often be extremely expensive to search for concrete solutions. Posting the constraints, on the other hand, is often very fast.
If, as in your case, the two parts are uncleanly mixed, you cannot tell easily which part is responsible if there are unexpected problems such as nontermination.
In your case, the only thing you should improve in the main predicate is enforcing said separation between constraint posting and search.
The mistake that causes unexpected failure is most likely contained in one of the rules you did not post here. You can find out which rules are involved in the failure by systematically replacing the goals in which they are called by true. Thus, there's no need for tracing: You can debug CLP(FD) programs declaratively in this way.
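As a tiny self-contained illustration of this technique (SWI-Prolog; the three constraints are invented, and the last one makes the conjunction unsatisfiable):

:- use_module(library(clpfd)).

constrain_toy([X,Y,Z]) :-
    X #< Y,
    Y #< Z,
    Z #< X.    % suspect goal: temporarily replace it by true to test its involvement

The query ?- length(Vs, 3), Vs ins 1..3, constrain_toy(Vs). fails. After replacing the goal Z #< X by true, the same query succeeds, which tells us that Z #< X participates in the unexpected failure.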
EDIT: Here is more information about the separation between posting constraints and the search for concrete solutions. As introduced in GUPU, we will use the notion of core relation, which has the following properties:
By convention, its name ends with an underscore _.
Also by convention, its last argument is the list of variables that need to be labeled.
It posts the CLP(FD) constraints. This is also called the (constraint) modeling part or (constraint) model.
It doesn't use labeling/2.
The search part is usually performed by label/1 or labeling/2.
Suppose you have a predicate where you intermingle these two aspects, such as in your current case:
matrix(M) :-
    constraints_hold(M),
    ... relate M to variables Vs ...
    labeling(Strategy, Vs).
Obviously, for the reasons explained above, the call of labeling/2 is the part we want to remove from this predicate. Of course, as you observe, we still want to somehow access the variables that are supposed to be labeled.
We do this as follows:
We introduce a new argument to the core relation to pass around the list of finite domain variables that need to be labeled.
By convention, we reflect the additional argument by appending an underscore (_) to the predicate name.
So, we obtain the following core relation:
matrix_(M, Vs) :-
    constraints_hold(M),
    ... relate M to variables Vs ...
The only missing part (which you haven't done yet, but which you should have done in any case), is stating the relation between the object of interest (in this case: the matrix) and the finite domain variables. This is the part I leave as a simple exercise for you. Hint: append/2.
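Without giving the whole exercise away, a hedged sketch of what is meant, filling in the skeleton above with append/2 (the variant that concatenates a list of lists; M is assumed to be the list of rows built as in the question, and constraints_hold/1 stands for the constraint-posting part):

matrix_(M, Vs) :-
    constraints_hold(M),
    append(M, Vs).       % Vs is the concatenation of all rows of M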
Once you have done all this, you can solve the whole task by combining the core relation and labeling/2 in a single query or predicate:
?- matrix_(M, Vs), labeling(Strategy, Vs).
Note that this separation between core relation and search:
makes it extremely easy to try different labeling strategies without recompiling your program.
allows you to determine important procedural properties of the core relation without needing to search for concrete solutions.
Use the introduction and explanation of this important separation as an indicator when judging the quality of any text about CLP(FD) constraints.

How to avoid using assert and retractall in Prolog to implement global (or state) variables

I often end up writing Prolog code that involves some arithmetic calculation (or state information important throughout the program), by first obtaining the value stored in a predicate, then recalculating the value, and finally storing the value using retractall and assert, because in Prolog we cannot assign a value to a variable twice using is (thus making almost every variable that needs modification global). I have come to know that this is not a good practice in Prolog. In this regard I would like to ask:
Why is it a bad practice in Prolog (though I myself don't like to go through the above-mentioned steps just to have a kind of flexible (modifiable) variable)?
What are some general ways to avoid this practice? Small examples will be greatly appreciated.
P.S. I just started learning Prolog. I do have programming experience in languages like C.
Edited for further clarification
A bad example (in win-prolog) of what I want to say is given below:
:- dynamic(value/1).
:- assert(value(0)).
adds :-
    value(X),
    NewX is X + 4,
    retractall(value(_)),
    assert(value(NewX)).

mults :-
    value(Y),
    NewY is Y * 2,
    retractall(value(_)),
    assert(value(NewY)).

start :-
    retractall(value(_)),
    assert(value(3)),
    adds,
    mults,
    value(Q),
    write(Q).
Then we can query like:
?- start.
Here it is very trivial, but in real programs and applications, the above method of using global variables becomes unavoidable. Sometimes the list of declarations like assert(value(0))... grows very long, with many more assert predicates for defining more variables. This is done to make it possible to communicate values between different predicates and to store the states of variables during the runtime of the program.
Finally, I'd like to know one more thing:
When does the practice mentioned above become unavoidable in spite of various solutions suggested by you to avoid it?
The general way to avoid this is to think in terms of relations between states of your computations: You use one argument to hold the state that is relevant to your program before a calculation, and a second argument that describes the state after some calculation. For example, to describe a sequence of arithmetic operations on a value V0, you can use:
state0_state(V0, V) :-
    operation1_result(V0, V1),
    operation2_result(V1, V2),
    operation3_result(V2, V).
Notice how the state (in your case: the arithmetic value) is threaded through the predicates. The naming convention V0 -> V1 -> ... -> V scales easily to any number of operations and helps to keep in mind that V0 is the initial value, and V is the value after the various operations have been applied. Each predicate that needs to access or modify the state will have an argument that allows you to pass it the state.
A huge advantage of threading the state through like this is that you can easily reason about each operation in isolation: You can test it, debug it, analyze it with other tools etc., without having to set up any implicit global state. As another huge benefit, you can then use your programs in more directions provided you are using sufficiently general predicates. For example, you can ask: Which initial values lead to a given outcome?
?- state0_state(V0, given_outcome).
This is of course not readily possible when using the imperative style. You should therefore use constraints instead of is/2, because is/2 only works in one direction. Constraints are much easier to use and a more general modern alternative to low-level arithmetic.
The dynamic database is also slower than threading states through in variables, because it performs indexing etc. on each assertz/1.
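As a concrete illustration, here is a hedged sketch of the adds/mults example from the question rewritten in this threading style, using CLP(FD) constraints instead of is/2 (SWI-Prolog syntax):

:- use_module(library(clpfd)).

adds(V0, V)  :- V #= V0 + 4.
mults(V0, V) :- V #= V0 * 2.

start(Q) :-
    V0 = 3,
    adds(V0, V1),
    mults(V1, Q).

?- start(Q). yields Q = 14, and because constraints are used instead of is/2, the individual steps also work in other directions, for example ?- adds(V0, 10). yields V0 = 6.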
1 - It's bad practice because it destroys the declarative model that (pure) Prolog programs exhibit.
Then the programmer must think in procedural terms, and the procedural model of Prolog is rather complicated and difficult to follow.
Specifically, we must be able to decide about the validity of asserted knowledge while the program backtracks, i.e. follows alternative paths to those already tried, paths that (maybe) caused the assertions.
2 - We need additional variables to keep the state. A practical, though maybe not very intuitive, way is to use grammar rules (a DCG) instead of plain predicates. Grammar rules are translated with two additional list arguments, normally hidden, and we can use those arguments to pass the state around implicitly, and reference/change it only where needed.
A really interesting introduction is here: DCGs in Prolog by Markus Triska. Look for "Implicitly passing states around": you'll find this enlightening small example:
num_leaves(nil), [N1] --> [N0], { N1 is N0 + 1 }.
num_leaves(node(_,Left,Right)) -->
    num_leaves(Left),
    num_leaves(Right).
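A possible query, assuming phrase/3: the initial count is passed in as [0], and the final count comes back as the remaining "list":

?- phrase(num_leaves(node(a, node(b,nil,nil), nil)), [0], [N]).
N = 3.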
More generally, and for further practical examples, see Thinking in States, from the same author.
edit: generally, assert/retract are required only if you need to change the database, or to keep track of computation results across backtracking. A simple example from my (very) old Prolog interpreter:
findall_p(X, G, _) :-
    asserta(found('$mark')),
    call(G),
    asserta(found(X)),
    fail.
findall_p(_, _, N) :-
    collect_found([], N),
    !.

collect_found(S, L) :-
    getnext(X),
    !,
    collect_found([X|S], L).
collect_found(L, L).

getnext(X) :-
    retract(found(X)),
    !,
    X \= '$mark'.
findall/3 can be seen as the basic all-solutions predicate. That code should be essentially the same as in the Clocksin-Mellish textbook, Programming in Prolog. I used it while testing the 'real' findall/3 I implemented. You can see that it's not 'reentrant', because nested calls would share the '$mark' marker.
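A quick usage sketch (in SWI-Prolog, where asserta/1 creates found/1 as a dynamic predicate on first use):

?- findall_p(X, member(X, [a,b,c]), Xs).
Xs = [a, b, c].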

What would cause Prolog to succeed on a match, but fail when asked to label outputs?

I'm trying to solve a logic puzzle with Prolog, as a learning exercise, and I think I've correctly mapped the problem using the GNU Prolog finite domain solver.
When I run the solve function, Prolog spits back: yes and a list of variables all constrained to the range 0..1 (booleans, as I've so constrained them). The problem is, when I try to add an fd_labeling(Solution) clause, Prolog about-faces and spits out: no.
I'm new to this language and I can't seem to find any course of attack to figure out why everything seems to work until I actually ask it to label the answers...
Apparently, you didn't "correctly" map the problem to FD, since you get a "no" when you try to label the variables.
What you do in Constraint Logic Programming is set up a constraint model, where you have variables with a domain (in your case booleans with the domain [0,1]), and a number of constraints between these variables. Each constraint has a propagation rule that tries to achieve consistency for the domains of the variables on which the constraint is posted. Values that are not consistent are removed from the domains. There are several types of consistency, but they have one thing in common: the constraints usually won't by themselves give you a full solution, or even tell you whether there is a solution for the constraint model.
As an example, say you have two variables X and Y, both with domains [1..10], and the constraint X < Y. Then the propagation rule will remove the value 1 from the domain of Y and remove 10 from the domain of X. It will then stop, since the domains are now consistent: for each value in one domain there exists a value in the other domain so that the constraint is fulfilled.
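This propagation can be observed directly at the GNU Prolog toplevel (a hedged sketch of the query; the exact display of the domain variables varies):

| ?- fd_domain([X,Y], 1, 10), X #< Y.

The answer then shows X constrained to 1..9 and Y to 2..10, without any labeling having taken place.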
In order to get a solution (where all variables are bound to values), you need to label variables. Each labeling will wake up the constraints attached to the labeled variable, triggering another round of propagation. This will lead to a solution (all variables bound to values, answer: yes) or failure (in each branch of the search tree, some variable ends up with an empty domain, answer: no).
Since each constraint is only aiming for consistency of the domains of the variables on which it is posted, it is possible that an infeasibility that arises from a combination of constraints is not detected during the propagation stage. For example, three variables X,Y,Z with domains [1..2], and pairwise inequality constraints. This seems to have happened with your constraint model.
If you are sure that there must be a solution to the puzzle, then your constraint model contains some infeasibility. Maybe a sharp look at the constraints is already sufficient to spot it.
If you don't see any obvious infeasibility (e.g., some contradicting constraints like the inequality example above), you need to debug your program. If it's possible, don't use a built-in labeling predicate, but write your own. Then you can add some output predicate that allows you to trace what variable was instantiated and what changes in the boolean decision variables this caused or whether it led to a failure.
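A hedged sketch of such a hand-written, tracing labeling procedure for GNU Prolog (the predicate name is made up; fd_dom/2 reflects the current domain as a list of values, and format/2 is used for the trace output):

trace_label([]).
trace_label([V|Vs]) :-
    fd_dom(V, Dom),
    (   member(V, Dom),
        format("assigned ~w from ~w~n", [V, Dom])
    ;   format("no value of ~w works here, backtracking~n", [Dom]),
        fail
    ),
    trace_label(Vs).

Calling trace_label(Solution) instead of fd_labeling(Solution) then shows each assignment, and each point where a variable's whole domain has been exhausted, as the search proceeds.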
(@twinterer already gave an explanation; my answer tries to take it from a different angle)
When you enter a query to Prolog what you get back is an answer. Often an answer contains a solution, sometimes it contains several solutions and sometimes it does not contain any solution at all. Quite often these two notions are confused. Let's look at examples with GNU Prolog:
| ?- length(Vs,3), fd_domain_bool(Vs).
Vs = [_#0(0..1),_#19(0..1),_#38(0..1)]
yes
Here, we have an answer that contains 8 solutions. That is:
| ?- length(Vs,3), fd_domain_bool(Vs), fd_labeling(Vs).
Vs = [0,0,0] ? ;
Vs = [0,0,1] ? ;
...
Vs = [1,1,1]
yes
And now another query. That is the example #twinterer referred to.
| ?- length(Vs,3), fd_domain_bool(Vs), fd_all_different(Vs).
Vs = [_#0(0..1),_#19(0..1),_#38(0..1)]
yes
The answer looks the same as before. However, it does no longer contain a solution.
| ?- length(Vs,3), fd_domain_bool(Vs), fd_all_different(Vs), fd_labeling(Vs).
no
Ideally in such a case, the toplevel would not say "yes" but "maybe". In fact, CLP(R), one of the very first constraint systems, did this.
Another way to make this a little bit less mysterious is to show the actual constraints involved. SWI does this:
?- length(Vs,3), Vs ins 0..1, all_different(Vs).
Vs = [_G565,_G568,_G571],
_G565 in 0..1,
all_different([_G565,_G568,_G571]),
_G568 in 0..1,
_G571 in 0..1.
?- length(Vs,3), Vs ins 0..1, all_different(Vs), labeling([], Vs).
false.
So SWI shows you all constraints that have to be satisfied to get a concrete solution. Read SWI's answer as: Yes, there is a solution, provided all this fine print is true!
Alas, the fine print is false.
And yet another way to solve this problem is to get an implementation of all_different/1 with stronger consistency. But this only works in specific cases.
?- length(Vs,3), Vs ins 0..1, all_distinct(Vs).
false.
In the general case you cannot expect a system to maintain global consistency. Reasons:
Maintaining consistency can be very expensive. It is often better to delegate such decisions to labeling. In fact, the simple all_different/1 is often faster than all_distinct/1.
Better consistency algorithms are often very complex.
In the general case, maintaining global consistency is an undecidable problem.

Resources