Prolog manual or custom labeling

I am currently writing a solver for a floor planning problem in Prolog and have some issues with the labeling part.
The current problem is that my constraints are posted, but when I launch the labeling, it takes forever to find a solution. I would like to bring in some heuristics.
My question is, how do I manually label my variables? I am afraid that after defining a clpfd variable like this:
X in Xinf..Xsup
and constraining it, if I do something like:
fd_sup(X, Xmax),
X = Xmax,
...
in my custom labeling, I won't be using the backtracking ability of Prolog to test the other values of X's domain. Am I wrong?
Also, is there a smarter way to label my variables than writing custom labeling procedures? My idea of heuristics would consist of alternately trying the extrema of a variable's domain (like max(X), min(X), max(X-1), min(X-1), etc.).
Hope you can help me :)

It is not difficult to write a custom labeling procedure, and with most real problems you will eventually need one anyway in order to incorporate problem-specific heuristics.
The two main components of a labeling procedure are
variable selection: from all the remaining (i.e. not yet instantiated) problem variables, pick one to consider next.
value selection or branching: explore, via backtracking, two or more alternative sub-problems by reducing the chosen variable's domain in (usually) complementary ways.
Using this scheme, the default labeling procedure can be written as
label(Xs) :-
    (   select_variable(X, Xs, Xs1) ->
        branch(X),
        label(Xs1)
    ;   true    % done, no variables left
    ).

select_variable(X, [X|Xs], Xs).    % 'leftmost' strategy

branch(X) :- indomain(X).
You can now redefine select_variable/3 to implement techniques such as "first-fail", and redefine branch/1 to try domain values in different orders. As long as you make sure that branch/1 enumerates all of X's domain values on backtracking, your search remains complete.
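For example, here is a minimal sketch of a "first-fail" variable selection, assuming a system that provides fd_size/2 (SWI-Prolog's library(clpfd), SICStus and GNU Prolog all do); the helpers ff_best/2 and eq_delete/3 are made up for this illustration:

select_variable(X, Xs0, Xs) :-
    Xs0 = [_|_],                  % fail when no variables are left
    ff_best(Xs0, X),              % pick the variable with the smallest domain
    eq_delete(Xs0, X, Xs).

ff_best([X], X).
ff_best([X,Y|Ys], Best) :-
    fd_size(X, SX),
    fd_size(Y, SY),
    (   SX =< SY -> ff_best([X|Ys], Best)
    ;   ff_best([Y|Ys], Best)
    ).

eq_delete([], _, []).
eq_delete([Y|Ys], X, Zs) :-       % remove X by identity (==), not by unification
    (   X == Y -> Zs = Ys
    ;   Zs = [Y|Zs1],
        eq_delete(Ys, X, Zs1)
    ).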
Sometimes you want to try just one domain value first (say, one suggested by a heuristic), but, if it is no good, not commit to another value immediately.
Let's say that, as in your example, you want to try the maximum domain value first. You could write this as
branch(X) :-
    fd_sup(X, Xmax),
    (   X = Xmax        % try the maximum
    ;   X #\= Xmax      % otherwise exclude the maximum
    ).
Because the two cases are complementary and cover all possible values for X, your search is still complete. However, because of the second alternative, branch/1 can now succeed with an uninstantiated X, which means you must make sure in the labeling procedure that you don't lose this variable from your list. One possibility would be:
label(Xs) :-
    (   select_variable(X, Xs, Xs1) ->
        branch(X),
        (   var(X) -> append(Xs1, [X], Xs2) ; Xs2 = Xs1 ),
        label(Xs2)
    ;   true    % done, no variables left
    ).

First, always try the built-in heuristics: ff ("first fail") is often a good strategy.
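For example, with SWI-Prolog's library(clpfd) you can combine the built-in options directly (a small illustration; the variables and the constraint are made up):

?- Vs = [X,Y,Z], Vs ins 1..9, X + Y #= Z,
   labeling([ff, bisect], Vs).

Here ff labels the variable with the smallest domain first, and bisect branches by splitting domains instead of enumerating values one by one.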
For custom labeling strategies, it is often easiest to first convert the domain to a list, then reorder the list, and then simply use member/2 to assign the values of the domain using the new order.
A good building block is dom_integers/2, relating a finite CLP(FD) domain to a list of integers:
:- use_module(library(clpfd)).
dom_integers(D, Is) :- phrase(dom_integers_(D), Is).
dom_integers_(I) --> { integer(I) }, [I].
dom_integers_(L..U) --> { numlist(L, U, Is) }, Is.
dom_integers_(D1\/D2) --> dom_integers_(D1), dom_integers_(D2).
Your specific strategy is easily expressed on a list of such ordered integers, relating these integers to a second list where the values occur in the order you describe:
outside_in([]) --> [].
outside_in([I]) --> [I].
outside_in([First|Rest0]) --> [First,Last],
    { append(Rest, [Last], Rest0) },
    outside_in(Rest).
Sample query and result:
?- phrase(outside_in([1,2,3,4]), Is).
Is = [1, 4, 2, 3] ;
false.
Combining this with fd_dom/2 and dom_integers/2, we get (bindings for variables other than X omitted):
?- X in 10..20,
   fd_dom(X, Dom),
   dom_integers(Dom, Is0),
   phrase(outside_in(Is0), Is),
   member(X, Is).
X = 10 ;
X = 20 ;
X = 11 ;
X = 19 ;
X = 12 ;
X = 18 ;
etc.
Nondeterminism is preserved by member/2.

Make sure to distinguish labeling strategies from additional propagation. These two aspects are currently a bit mixed in your question.
In SWI-Prolog, there is a predicate called clpfd:contracting/1. It does what you describe: It tries values from the domain boundaries, and removes values that can be seen as inconsistent, i.e., for which it is known that no solution exists.
Therefore, if you have a list of variables Vs, you can try: clpfd:contracting(Vs), and see if this helps.
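A minimal usage sketch (constraints/1 stands in for your own core relation and is purely illustrative):

solve(Vs) :-
    constraints(Vs),           % post all problem constraints first
    clpfd:contracting(Vs),     % optional extra pruning at the domain boundaries
    labeling([ff], Vs).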
Note that this can also slow down the search significantly, though on the other hand it can also help significantly by reducing the search space before any labeling is even tried!

To complement the other answers (one contrasting labeling and propagation, one showing a dedicated labeling method), I now tackle a further very important aspect of this question:
Very often, when beginners complain about the speed of their code, it turns out that their code in fact doesn't even terminate! More efficiency would not help in that case.
Hence, this answer points you towards first ensuring actual termination of your relation.
The best way to ensure termination of CLP(FD) programs is to separate them into 2 parts:
the first, called the core relation, simply posts all constraints.
the second uses labeling/2 to perform the actual search.
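For instance, such a separation might look like this (a sketch only; instance_variables/2 and post_constraints/2 are hypothetical placeholders for your own constraint-posting code, and the 0..100 domain is arbitrary):

:- use_module(library(clpfd)).

% core relation: only posts constraints, performs no search
solution(Instance, Vs) :-
    instance_variables(Instance, Vs),
    Vs ins 0..100,
    post_constraints(Instance, Vs).

% the search is kept separate
solve(Instance, Vs) :-
    solution(Instance, Vs),
    labeling([ff], Vs).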
Have you done this in your program? If not, please do. When this is done, make sure that the core relation, say solution/2 (where the arguments are: a term denoting the task instance, and the list of variables to be labeled) terminates universally by querying:
?- solution(Instance, Vs), false.
If this terminates, then the following also terminates:
?- solution(Instance, Vs), label(Vs), false.
Of course, in larger tasks, you have no chance to actually witness the termination of the latter query, but a good chance to witness the termination of the first query, because setting up the constraints is often much faster than actually obtaining even a single solution.
Therefore, test whether your core relation terminates!

This follows up on this previous answer by @mat.
If you have got some more CPU cycles to burn, try shave_zs/1 as defined in this previous answer.
shave_zs/1 kind of works like the auxiliary library predicate clpfd:contracting/1. Unlike contracting/1, however, all values are "up for grabs"—not just the ones at the boundary. YMMV!

Related

Branch and bound without assert / retract

This exercise asked me to find the best combination of three products, given the prices and specific combinations to avoid. The textbook employs assertz and retractall to emulate a state variable.
price(a1, 1900).
price(a2, 750).
price(a3, 900).
price(b1, 300).
price(b2, 500).
price(b3, 450).
price(b4, 600).
price(c1, 700).
price(c2, 850).
incompatible(a2, c1).
incompatible(b2, c2).
incompatible(b3, c2).
incompatible(a2, b4).
incompatible(a1, b3).
incompatible(a3, b3).
:- dynamic bound/1.
bound(5000).
solution(A, B, C, P) :-
    member(A, [a1, a2, a3]),
    price(A, PA),
    member(B, [b1, b2, b3, b4]),
    \+ incompatible(A, B),
    price(B, PB),
    P0 is PA + PB,
    bound(Bound),
    write('Current bound: '), writeln(Bound),
    P0 =< Bound,
    member(C, [c1, c2]),
    \+ incompatible(A, C),
    \+ incompatible(B, C),
    price(C, PC),
    P is PA + PB + PC,
    P =< Bound,
    retractall(bound(_)),
    assertz(bound(P)).
Is it possible to use branch and bound in Prolog without resorting to dynamic predicates?
Is there a consensus on state variables in Prolog?
Is there a way to restrict the scope of a state variable (or whatever the proxy would be) to a single rule?
The biggest problem with this code is that it is not re-entrant. It implicitly assumes that at any point in time there is exactly one instance of searching. But if some part inside wants to use a similar search all by itself, no warning or error prevents this from happening.
There is not much consensus on what the precise mechanism should look like; existing systems all differ. To understand this, look at the implementation of findall/3 or call_nth/2 where there are specific solutions for SICStus, SWI, and Eclipse. Precisely this mechanism would be needed here too.
But also consider to make this search more generic. Presumably the textbook you are referring to was written prior to the general acceptance of call/N.
Is it possible to use branch and bound in Prolog without resorting to dynamic predicates?
Yes, you just hand the bounds around as additional arguments across recursive calls (as is done in functional programming). IMHO, this is actually a cleaner solution than asserting into the Prolog database. The trick of changing the database and then performing a redo is confusing, as it's not the same program that is being redone. It's a "neat trick" to put into a textbook though.
Is there a consensus on state variables in Prolog?
Just hand them around as arguments. If need be, you can use Global Variables if supported. See also: Database in SWI-Prolog. To keep argument lists small-ish, you may want to consider using associative arrays (SWI-Prolog dicts, library(assoc)) as "argument bundle". But as usual, this has implications on efficiency.
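A small sketch of the "argument bundle" idea with library(assoc), where the key bound is just an illustration; the state is then threaded through the search as an ordinary pair of arguments State0/State:

:- use_module(library(assoc)).

initial_state(State) :-
    empty_assoc(State0),
    put_assoc(bound, State0, 5000, State).    % starting bound, as in the question

state_bound(State, Bound) :-
    get_assoc(bound, State, Bound).

state_set_bound(State0, Bound, State) :-
    put_assoc(bound, State0, Bound, State).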
Is there a way to restrict the scope of a state variable (or whatever the proxy would be) to a single rule?
Evidently, arguments are local, but there is, sadly, nothing beyond. I would like to have "local visibility scopes", especially for predicates, so as to unequivocally associate "helper predicates" with their "parent predicate", but "modules" (implemented variously) are the best we have. (This is not helped by a tradition that finds modules of enormous size acceptable, something for which I would berate an intern in no uncertain terms, same as for writing an object of several thousand lines, but I digress.)
Your solution is not always using the proper Bound upon backtracking. The problem is that the variable gets bound too soon, and then even if the fact gets updated with assertz/retract, the variable will still be bound to the previous value. You can see the effects by letting the toplevel backtrack through all the solutions. There are some solutions interspersed that have a higher cost than a previous one.
Here goes an iterative solution customized to your problem, though you could probably make it more general by using call/N instead of the fixed calls to price/2 and any_incompatible/2.
solution2(Bound, Best) :-
    branch_and_bound([[a1,a2,a3],[b1,b2,b3,b4],[c1,c2]], Bound, [], Best).

branch_and_bound([[]|_], _, Best, Best).
branch_and_bound([[Item|Layer]|Layers], Bound, CurrentBest, Best) :-
    price(Item, ItemPrice),
    (   ItemPrice >= Bound
    ->  Bound1-CurrentBest1 = Bound-CurrentBest
    ;   branch_and_bound(Layers, Bound, Bound1, current(ItemPrice, [Item]), CurrentBest, CurrentBest1)
    ),
    branch_and_bound([Layer|Layers], Bound1, CurrentBest1, Best).

branch_and_bound([], Bound, CurrentPrice, current(CurrentPrice, Items), PreviousBest, [best(CurrentPrice, RItems)|OtherBest]) :-
    reverse(Items, RItems),
    (   CurrentPrice == Bound -> OtherBest = PreviousBest ; OtherBest = [] ).
branch_and_bound([[]|_], Bound, Bound, _, Best, Best).
branch_and_bound([[Item|Layer]|Layers], Bound, Bound2, current(CurrentPrice, Items), CurrentBest, Best) :-
    price(Item, ItemPrice),
    NewPrice is CurrentPrice + ItemPrice,
    (   ( NewPrice > Bound ; any_incompatible(Items, Item) )
    ->  Bound1-CurrentBest1 = Bound-CurrentBest
    ;   branch_and_bound(Layers, Bound, Bound1, current(NewPrice, [Item|Items]), CurrentBest, CurrentBest1)
    ),
    branch_and_bound([Layer|Layers], Bound1, Bound2, current(CurrentPrice, Items), CurrentBest1, Best).

any_incompatible([OneItem|Items], Item) :-
    (   incompatible(OneItem, Item) -> true ; any_incompatible(Items, Item) ).
It will list all the best solutions, trimming the search space with the current Bound defined by the currently found best solution.
Sample run:
?- solution2(5000, Best).
Best = [best(1900, [a3, b1, c1]), best(1900, [a2, b1, c2])].

Finding whether a number is a multiple of another

Looking at the code below:
multiple(X,0).
multiple(X,Y) :- lt(0,X), lt(0,Y), diff(Y,X,D), multiple(X,D).
There happens to be something wrong. For your reference:
lt/2 is whether the first argument is less than the second.
diff/3 is whether the third argument is equal to the first argument minus the second.
lt/2 and diff/3 are defined correctly.
Is there a logical mistake in the definition? Is assuming that 0 is a multiple of every number problematic, or is the logical mistake somewhere else? I get correct answers, but I think the query goes into an infinite loop.
EDIT:
here are the other definitions.
natNum(0).
natNum(s(X)) :- natNum(X).
lt(0,s(X)) :- natNum(X).
lt(s(X),s(Y)) :- lt(X,Y).
sum(0,X,X).
sum(s(X),Y,s(Z)) :- sum(X,Y,Z).
diff(X,Y,Z) :- sum(Z,Y,X).
?- multiple(X, s(s(s(s(s(s(0))))))).
where s(0) is 1, s(s(0)) is 2 etc. It gives all the desired answers for X but after the last answer, it gets stuck. I assume in an infinite recursive loop?
What is happening in your program? Does it loop forever, or does it only take some time since you haven't updated your hardware in recent decades? We cannot tell. (Actually, we could tell by looking at your program, but that is much too complex for the moment).
What we can do with ease is narrow down the source of this costly effort. And this, without a deep understanding of your program. Let's start with the query:
?- multiple(X, s(s(s(s(s(s(0))))))).
X = s(0)
; X = s(s(0))
; X = s(s(s(0)))
; X = s(s(s(s(s(s(0))))))
; loops. % or takes too long
Isn't there an easier way to do this? All this semicolon typing. Instead, simply add false to your query. In this manner the solutions found are no longer shown and we can concentrate on this annoying looping. And, if we're at it, you can also add false goals into your program! By such goals the number of inferences might be reduced (or stays the same). And if the resulting fragment (called a failure-slice) is looping, then this is a reason why your original program loops:
multiple(_X,0) :- false.
multiple(X,Y) :- lt(0,X), false, lt(0,Y), diff(Y,X,D), multiple(X,D).
natNum(0) :- false.
natNum(s(X)) :- natNum(X), false.
lt(0,s(X)) :- natNum(X), false.
lt(s(X),s(Y)) :- false, lt(X,Y).
?- multiple(X, s(s(s(s(s(s(0))))))), false.
loops.
Do you recognize your program? Only those parts remain that are needed for a loop. And, actually, in this case we have an infinite loop.
To fix this, we need to modify something in the remaining, visible part. I'd go for lt/2 whose first clause can be generalized to lt(0, s(_)).
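Concretely, that change looks like this (everything else stays exactly as in the question):

lt(0, s(_)).                  % generalized: no longer insists on natNum(X)
lt(s(X), s(Y)) :- lt(X, Y).

With this definition the query ?- multiple(X, s(s(s(s(s(s(0))))))), false. now terminates, and the solutions themselves are unchanged.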
But wait! Why is it OK to generalize away the requirement that we have a natural number? Look at the fact multiple(X,0). which you have written. You have not demanded that X is a natural number either. This kind of over-generalization often appears in Prolog programs. It improves termination properties at a relatively low price: sometimes the clauses are too general, but all terms that additionally fit into the generalization are not natural numbers. They are terms like any or [a,b,c], so if they appear somewhere you know that they do not belong to the solutions.
So the idea was to put false goals into your program such that the resulting program (the failure-slice) still loops. In the worst case you put false at a wrong place and the program terminates. By trial-and-error you get a minimal failure-slice. All those parts that are now switched off by false are irrelevant! In particular diff/3. So there is no need to understand it (for the moment). It suffices to look at the remaining program.

How to use member predicate to specify constraints in prolog

I'm trying to write a Prolog program which does the following:
I have some relations defined in the Relations list. (For example, [f1,s1] means f1 needs s1.) Depending on which features (f1, f2, f3) are selected in the TargetFeat list, I would like to create the Result list using constraint programming.
Here is a sample code:
Relations = [[f1, s1], [f2, s2], [f3, s3], [f3, s4]],
TargetFeat = [f3, f1],
Result = [],
member(f3, TargetFeat) #= member(s3, Result),    % one of the constraints
labeling(Result).
This doesn't work because #= works only with arithmetic expressions as operands. What are the alternatives to achieve something like this?
There are many possible ways to model such dependencies with constraints. I consider in this post CLP(FD) and CLP(B) constraints, because they are most commonly used for solving combinatorial tasks.
Consider first CLP(FD), which is more frequently used and more convenient in many ways. When using CLP(FD) constraints, you again have several options to represent your task. However, no matter which model you eventually choose, you must first switch all items in your representation to suitable entities that the constraint solver can actually reason about. In the case of CLP(FD), this means switching your entities to integers.
Translating your entities to corresponding integers is very straightforward, and it is one of the reasons why CLP(FD) constraints also suffice to model tasks over domains that actually do not contain integers, but can be mapped to integers. So, let us suppose you are not reasoning about features f1, f2 and f3, but about integers 0, 1, and 2, or any other set of integers that suits you.
You can directly translate your requirements to this new domain. For example, instead of:
[f1,s1] means: f1 needs s1
we can say for example:
0 -> 3 means: 0 needs 3
And this brings us already very close to CLP(FD) constraints that let us model the whole problem. We only need to make one more mental leap to obtain a representation that lets us model all requirements. Instead of concrete integers, we now use CLP(FD) variables to indicate whether or not a specific requirement must be met to obtain the desired features. We shall use the variables R1, R2, R3, ... to denote which requirements are needed, by using either 0 (not needed) or 1 (needed) for each of the possible requirements.
At this point, you must develop a clear mental model of what you actually want to describe. I explain what I have in mind: I want to describe a relation between three things:
a list Fs of features
a list Ds of dependencies between features and requirements
a list Rs of requirements
We have already considered how to represent all these entities: (1) is a list of integers that represent the features we want to obtain. (2) is a list of F -> R pairs that mean "feature F needs requirement R", and (3) is a list of Boolean variables that indicate whether or not each requirement is eventually needed.
Now let us try to relate all these entities to one another.
First things first: If no features are desired, it all is trivial:
features_dependencies_requirements([], _, _).
But what if a feature is actually desired? Well, it's simple: We only need to take into account the dependencies of that feature:
features_dependencies_requirements([F|Fs], Ds, Rs) :-
    member(F->R, Ds),
so we have in R the requirement of feature F. Now we only need to find the suitable variable in Rs that denotes requirement R. But how do we find the right variable? After all, a Prolog variable "does not have a bow tie", or—to foreigners—lacks a mark by which we could distinguish it from others. So, at this point, we would actually find it convenient to be able to nicely pick a variable out of Rs given the name of its requirement. Let us hence suppose that we represent Rs as a list of pairs of the form I=R, where I is the integer that defines the requirement, and R is the Boolean indicator that denotes whether that requirement is needed. Given this representation, we can define the clause above in its entirety as follows:
features_dependencies_requirements([F|Fs], Ds, Rs) :-
    member(F->I, Ds),
    member(I=1, Rs),
    features_dependencies_requirements(Fs, Ds, Rs).
That's it. This fully relates a list of features, dependencies and requirements in such a way that the third argument indicates which requirements are necessary to obtain the features.
At this point, the attentive reader will see that no CLP(FD) constraints whatsoever were actually used in the code above, and in fact the translation of features to integers was completely unnecessary. We can as well use atoms to denote features and requirements, using the exact same code shown above.
Sample query and answers:
?- features_dependencies_requirements([f3,f1],
       [f1->s1,f2->s2,f3->s3,f3->s4],
       [s1=S1,s2=S2,s3=S3,s4=S4]).
S1 = S3, S3 = 1 ;
S1 = S4, S4 = 1 ;
false.
Obviously, I have made the following assumption: The dependencies are disjunctive, which means that the feature can be implemented if at least one of the requirements is satisfied. If you want to turn this into a conjunction, you will obviously have to change this. You can start by representing dependencies as F -> [R1,R2,...R_n].
Other than that, can it still be useful to translate your entities to integers? Yes, because many of your constraints can likely be formulated also with CLP(FD) constraints, and you need integers for this to work.
To get you started, here are two ways that may be usable in your case:
use constraint reification to express what implies what. For example: F #==> R (a small sketch follows below).
use global constraints like table/2 that express relations.
Particularly in the first case, CLP(B) constraints may also be useful. You can always use Boolean variables to express whether a requirement must be met.
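To illustrate the first option with SWI-Prolog's library(clpfd): if F3 and S3 are 0/1 variables that stand for "feature f3 is selected" and "requirement s3 is needed" (the names are made up), the dependency can be posted as an implication, and setting the feature then forces the requirement:

?- [F3,S3] ins 0..1, F3 #==> S3, F3 = 1.
F3 = 1,
S3 = 1.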
Not a solution but some observations that would not fit a comment.
Don't use lists to represent relations. For example, instead of [f1, s1], write requires(f1, s1). If these requirements are fixed, then define requires/2 as a predicate. If you need to identify or enumerate features, consider a feature/1 predicate. For example:
feature(f1).
feature(f2).
...
Same for s1, s2, ... E.g.
support(s1).
support(s2).
...
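The dependencies from the question then become facts in the same style:

requires(f1, s1).
requires(f2, s2).
requires(f3, s3).
requires(f3, s4).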

Speed issue with code

I have an issue with a solution, which very quickly becomes slow.
The code below determines if an array of "rules" is valid - an example could be
rules_valid([rule(2,[1,2,3]), rule(2,[1,2,3])],[])
which should be false as it is not possible to select 4 (2+2) distinct numbers from the lists, and
rules_valid([rule(2,[1,2,3]), rule(2,[3,4,5])],[])
is hence true.
For very small queries this performs fine, but it becomes slow very quickly. Can anyone point me in the direction of how to speed up this code, if possible?
rules_valid([], _).
rules_valid([rule(RequiredUserNumber, UserIds)|RemainingRules], UsedUserIds) :-
    n_users_from_rule(RequiredUserNumber, UserIds, UsedUserIds, UpdatedUsedUserIds),
    rules_valid(RemainingRules, UpdatedUsedUserIds).

n_users_from_rule(0, _, UsedUserIds, UsedUserIds).
n_users_from_rule(RequiredUserNumber, UserIds, UsedUserIds, UpdatedUsedUserIds) :-
    0 < RequiredUserNumber,
    UpdatedRequiredUserNumber is RequiredUserNumber - 1,
    select(UserId, UserIds, UpdatedUserIds),
    not(member(UserId, UsedUserIds)),
    n_users_from_rule(UpdatedRequiredUserNumber, UpdatedUserIds, [UserId|UsedUserIds], UpdatedUsedUserIds).
UPDATE
So, switching to CLP(FD) for this piece of logic makes that part much faster. But I cannot seem to wrap my head around how to make the rest of my application use CLP(FD) as well, so that it works for the whole application.
I have a list of userRequests:
userRequest(UserId, PrioritizedRequestList)
request(State, PeriodList)
period(FromDay, ToDay)
i.e.
userRequest( 1 , [request(State,[period(1,5)]),request(State,[period(1,2),period(1,5)])] )
and then I have a list of rule groups which is the structure from my problem wrapped in a
ruleGroup(Day, [rules])
So what I do to change the state of a userRequest to approved is take the first request and approve it, and then remove the userId from all ruleGroups whose day overlaps the request, because that user is no longer able to fulfill the rule on that day.
I have big trouble seeing how I can update these domains by removing the user from them.
The issue is that I have been working with lists and not domains, and have a lot of logic around them that I would have to change as well.
Check out CLP(FD) constraints. Many Prolog systems, including SICStus Prolog and also SWI, ship with a powerful constraint called all_distinct/1: It is true iff all variables from the given list can be assigned distinct integers.
For example, let us state your first query in terms of CLP(FD):
?- length(Ls, 4), Ls ins 1..3, all_distinct(Ls).
false.
From this, we see that there is no admissible solution.
In the second case though, we get:
?- length(Ls, 4), Ls ins 1..5, all_distinct(Ls).
Ls = [_G3409, _G3412, _G3415, _G3418],
_G3409 in 1..5,
all_distinct([_G3409, _G3412, _G3415, _G3418]),
_G3412 in 1..5,
_G3415 in 1..5,
_G3418 in 1..5.
i.e., a residual program that is declaratively equivalent to the original query, and from which in this particular case we know that there is indeed a solution. (Note: This is possible here because all_distinct/1 implements domain consistency.)
Hence, in your rule validation, you can write code that uses CLP(FD) constraints to detect inconsistencies, which is typically much more efficient than naive approaches. I leave implementing this translation as an easy exercise.
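To make the exercise a bit more tangible, here is a hedged sketch of one possible translation using SWI-Prolog's library(clpfd). Unlike the rules_valid/2 in the question it takes a single argument, and list_to_drange/2 is a made-up helper that turns a list of user ids into a domain expression:

:- use_module(library(clpfd)).

rules_valid(Rules) :-
    rules_vars(Rules, Vs),
    all_distinct(Vs),
    label(Vs).                          % or leave labeling to the caller

rules_vars([], []).
rules_vars([rule(N, UserIds)|Rules], Vs) :-
    length(Us, N),                      % one variable per required user slot
    list_to_drange(UserIds, Dom),
    Us ins Dom,
    rules_vars(Rules, Vs0),
    append(Us, Vs0, Vs).

% [1,2,3]  ==>  1 \/ 2 \/ 3  (a CLP(FD) domain expression)
list_to_drange([I], I).
list_to_drange([I|Is], I \/ D) :-
    Is = [_|_],
    list_to_drange(Is, D).

With this, rules_valid([rule(2,[1,2,3]), rule(2,[1,2,3])]) already fails when all_distinct/1 is posted, while rules_valid([rule(2,[1,2,3]), rule(2,[3,4,5])]) succeeds.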

What would cause Prolog to succeed on a match, but fail when asked to label outputs?

I'm trying to solve a logic puzzle with Prolog, as a learning exercise, and I think I've correctly mapped the problem using the GNU Prolog finite domain solver.
When I run the solve function, Prolog spits back: yes and a list of variables all bounded in the range 0..1 (booleans, as I've so constrained them). The problem is, when I try to add a fd_labeling(Solution) clause, Prolog does an about-face and spits out: no.
I'm new to this language and I can't seem to find any course of attack to figure out why everything seems to work until I actually ask it to label the answers...
Apparently, you didn't "correctly" map the problem to FD, since you get a "no" when you try to label the variables.
What you do in Constraint Logic Programming is set up a constraint model, where you have variables with a domain (in your case booleans with the domain [0,1]), and a number of constraints between these variables. Each constraint has a propagation rule that tries to achieve consistency for the domains of the variables on which the constraint is posted. Values that are not consistent are removed from the domains. There are several types of consistency, but they have one thing in common: the constraints usually won't by themselves give you a full solution, or even tell you whether there is a solution for the constraint model.
As an example, say you have two variables X and Y, both with domains [1..10], and the constraint X < Y. Then the propagation rule will remove the value 1 from the domain of Y and remove 10 from the domain of X. It will then stop, since the domains are now consistent: for each value in one domain there exists a value in the other domain so that the constraint is fulfilled.
In order to get a solution (where all variables are bound to values), you need to label variables. Each labeling will wake up the constraints attached to the labeled variable, triggering another round of propagation. This will lead to a solution (all variables bound to values, answer: yes) or failure (in each branch of the search tree, some variable ends up with an empty domain, answer: no).
Since each constraint is only aiming for consistency of the domains of the variables on which it is posted, it is possible that an infeasibility that arises from a combination of constraints is not detected during the propagation stage. For example, three variables X,Y,Z with domains [1..2], and pairwise inequality constraints. This seems to have happened with your constraint model.
If you are sure that there must be a solution to the puzzle, then your constraint model contains some infeasibility. Maybe a sharp look at the constraints is already sufficient to spot it.
If you don't see any obvious infeasibility (e.g., some contradicting constraints like the inequality example above), you need to debug your program. If it's possible, don't use a built-in labeling predicate, but write your own. Then you can add some output predicate that allows you to trace what variable was instantiated and what changes in the boolean decision variables this caused or whether it led to a failure.
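For instance, a very small hand-rolled labeling with trace output could look like this in GNU Prolog (a sketch only: it enumerates values in increasing order and prints every instantiation; the format string is arbitrary):

my_label([]).
my_label([V|Vs]) :-
    (   integer(V) -> true            % already instantiated by propagation
    ;   fd_min(V, Min),
        fd_max(V, Max),
        between(Min, Max, V),         % try the values in increasing order
        format("chose ~w~n", [V])
    ),
    my_label(Vs).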
(@twinterer already gave an explanation; my answer tries to take it from a different angle.)
When you enter a query to Prolog what you get back is an answer. Often an answer contains a solution, sometimes it contains several solutions and sometimes it does not contain any solution at all. Quite often these two notions are confused. Let's look at examples with GNU Prolog:
| ?- length(Vs,3), fd_domain_bool(Vs).
Vs = [_#0(0..1),_#19(0..1),_#38(0..1)]
yes
Here, we have an answer that contains 8 solutions. That is:
| ?- length(Vs,3), fd_domain_bool(Vs), fd_labeling(Vs).
Vs = [0,0,0] ? ;
Vs = [0,0,1] ? ;
...
Vs = [1,1,1]
yes
And now another query. That is the example @twinterer referred to.
| ?- length(Vs,3), fd_domain_bool(Vs), fd_all_different(Vs).
Vs = [_#0(0..1),_#19(0..1),_#38(0..1)]
yes
The answer looks the same as before. However, it no longer contains a solution.
| ?- length(Vs,3), fd_domain_bool(Vs), fd_all_different(Vs), fd_labeling(Vs).
no
Ideally in such a case, the toplevel would not say "yes" but "maybe". In fact, CLP(R), one of the very first constraint systems, did this.
Another way to make this a little bit less mysterious is to show the actual constraints involved. SWI does this:
?- length(Vs,3), Vs ins 0..1, all_different(Vs).
Vs = [_G565,_G568,_G571],
_G565 in 0..1,
all_different([_G565,_G568,_G571]),
_G568 in 0..1,
_G571 in 0..1.
?- length(Vs,3), Vs ins 0..1, all_different(Vs), labeling([], Vs).
false.
So SWI shows you all constraints that have to be satisfied to get a concrete solution. Read SWI's answer as: Yes, there is a solution, provided all this fine print is true!
Alas, the fine print is false.
And yet another way to solve this problem is to get an implementation of all_different/1 with stronger consistency. But this only works in specific cases.
?- length(Vs,3), Vs ins 0..1, all_distinct(Vs).
false.
In the general case you cannot expect a system to maintain global consistency. Reasons:
Maintaining consistency can be very expensive. It is often better to delegate such decisions to labeling. In fact, the simple all_different/1 is often faster than all_distinct/1.
Better consistency algorithms are often very complex.
In the general case, maintaining global consistency is an undecidable problem.

Resources