Hi, hopefully someone can help me. I was wondering whether my code below is sufficient for setting up a 12 x 12 matrix and, assuming 'constrain(M)' calls all the correct constraints (which are defined in rules lower down), labelling each of the rows. It's failing at the moment, and I've traced my constraints so I know they all work, but I didn't know whether it's because I'm calling them outside of the main predicate?
matrix(M) :-
    M = [R1,R2,R3,R4,R5,R6,R7,R8,R9,R10,R11,R12],
    R1 = [A,B,C,D,E,F,G,H,I,J,K,L],
    R2 = [A2,B2,C2,D2,E2,F2,G2,H2,I2,J2,K2,L2],
    R3 = [A3,B3,C3,D3,E3,F3,G3,H3,I3,J3,K3,L3],
    R4 = [A4,B4,C4,D4,E4,F4,G4,H4,I4,J4,K4,L4],
    R5 = [A5,B5,C5,D5,E5,F5,G5,H5,I5,J5,K5,L5],
    R6 = [A6,B6,C6,D6,E6,F6,G6,H6,I6,J6,K6,L6],
    R7 = [A7,B7,C7,D7,E7,F7,G7,H7,I7,J7,K7,L7],
    R8 = [A8,B8,C8,D8,E8,F8,G8,H8,I8,J8,K8,L8],
    R9 = [A9,B9,C9,D9,E9,F9,G9,H9,I9,J9,K9,L9],
    R10 = [A10,B10,C10,D10,E10,F10,G10,H10,I10,J10,K10,L10],
    R11 = [A11,B11,C11,D11,E11,F11,G11,H11,I11,J11,K11,L11],
    R12 = [A12,B12,C12,D12,E12,F12,G12,H12,I12,J12,K12,L12],
    constrain(M),
    labeling([],R1),
    labeling([],R2),
    labeling([],R3),
    labeling([],R4),
    labeling([],R5),
    labeling([],R6),
    labeling([],R7),
    labeling([],R8),
    labeling([],R9),
    labeling([],R10),
    labeling([],R11),
    labeling([],R12).
You should always separate the constraint posting from the actual search (labeling/2).
The reason is clear: it can often be extremely expensive to search for concrete solutions, whereas posting the constraints is often very fast.
If, as in your case, the two parts are uncleanly mixed, you cannot easily tell which part is responsible when there are unexpected problems such as nontermination.
In your case, the only thing you should improve in the main predicate is enforcing said separation between constraint posting and search.
The mistake that causes the unexpected failure is most likely contained in one of the rules you did not post here. You can find out which rules are involved in the failure by systematically replacing the goals in which they are called by true. There is thus no need for tracing: you can debug CLP(FD) programs declaratively in this way.
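For instance, a minimal sketch (the goal names here are hypothetical, since constrain/1 was not posted): suppose constrain/1 is defined as

    constrain(M) :-
        rows_ok(M),
        columns_ok(M),
        blocks_ok(M).

Then you can generalize away one goal at a time:

    constrain(M) :-
        rows_ok(M),
        true,              % columns_ok(M), generalized away
        blocks_ok(M).

If the query still fails with a goal generalized away, then the remaining goals already contain the mistake.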
EDIT: Here is more information about the separation between posting constraints and the search for concrete solutions. As introduced in GUPU, we will use the notion of core relation, which has the following properties:
By convention, its name ends with an underscore _.
Also by convention, its last argument is the list of variables that need to be labeled.
It posts the CLP(FD) constraints. This is also called the (constraint) modeling part or (constraint) model.
It doesn't use labeling/2.
The search part is usually performed by label/1 or labeling/2.
Suppose you have a predicate where you intermingle these two aspects, such as in your current case:
matrix(M) :-
    constraints_hold(M),
    ... relate M to variables Vs ...
    labeling(Strategy, Vs).
Obviously, for the reasons explained above, the call of labeling/2 is the part we want to remove from this predicate. Of course, as you observe, we still want to somehow access the variables that are supposed to be labeled.
We do this as follows:
We introduce a new argument to the core relation to pass around the list of finite domain variables that need to be labeled.
By convention, we reflect the additional argument by appending an underscore (_) to the predicate name.
So, we obtain the following core relation:
matrix_(M, Vs) :-
    constraints_hold(M),
    ... relate M to variables Vs ...
The only missing part (which you haven't done yet, but which you should have done in any case) is stating the relation between the object of interest (in this case: the matrix) and the finite domain variables. This part I leave as a simple exercise for you. Hint: append/2.
Once you have done all this, you can solve the whole task by combining the core relation and labeling/2 in a single query or predicate:
?- matrix_(M, Vs), labeling(Strategy, Vs).
Note that this separation between core relation and search:
makes it extremely easy to try different labeling strategies without recompiling your program (see the example queries below).
allows you to determine important procedural properties of the core relation without needing to search for concrete solutions.
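For example, with the core relation above, switching strategies is only a matter of changing the query (ff and bisect are options of SWI-Prolog's labeling/2; the available options may differ in other systems):

    ?- matrix_(M, Vs), labeling([ff], Vs).
    ?- matrix_(M, Vs), labeling([ff,bisect], Vs).

And you can check a procedural property such as termination of the constraint posting in isolation, without any search at all: if the query

    ?- matrix_(M, Vs), false.

terminates (it will simply fail), then the modeling part terminates universally.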
Use the introduction and explanation of this important separation as an indicator when judging the quality of any text about CLP(FD) constraints.
Hello all :) I'm pretty new to optimization and barely understand it (I was about ready to slit my wrists after figuring out how to write objective functions without any formal training on the matter), and I need a little help on a work project.
How would I go about setting a logical constraint when using the Optimization Toolbox, fmincon specifically (using Trust Region Reflective algorithm)?
I am optimizing 5 values (let's call it matrix OptMat), and I want to optimize with the constraint such that
max(OptMat)/min(OptMat) > 10
I assume this will optimize the 5 values of OptMat to be as low as possible while keeping the above constraint in mind, so that if a set of values for OptMat is found with a lower objective function value but which breaks the constraint, it will NOT report those values and will instead report the next-lowest OF where the OptMat values meet the above constraint?
For the record, my lower bounds are [0,0,0,0,0]. I'm not sure how to enter the constraint via the upper bounds, as that field only accepts doubles and my constraint is a logical expression. I tried the Active Set algorithm, and that enabled the Nonlinear Constraint Function box; I think I'm on the right track with that, but if so, I'm not sure of the syntax for entering my desired constraint. Another method that may or may not work that I could think of is using this as an upper boundary:
[min(OptMat)*10, min(OptMat)*10, min(OptMat)*10, min(OptMat)*10, min(OptMat)*10]
Again, I'm using the GUI Optimization Toolbox. I haven't looked much into command-line optimization (though I will eventually need to write this on the command line), and I think I read somewhere that there you can set the upper boundary and it does not have to be a double?
Thank you so very much for the help, if someone is able. I apologize if this is a really nooby question.
What you are looking for are nonlinear constraints; fmincon can handle them (I only know the command, not the GUI) via the nonlcon argument. For more information, look at this guide: http://de.mathworks.com/help/optim/ug/fmincon.html
How would you implement this? First, create a function:

    function [c, ceq] = mycondition(x)
        % fmincon expects nonlinear inequality constraints in the form
        % c(x) <= 0, so max(x)/min(x) > 10 is rewritten as:
        c = 10 - max(x)/min(x);
        ceq = [];    % no nonlinear equality constraints
    end

I had to rearrange the inequality to match the required formalism, i.e. c(x) <= 0 is needed. (Note that fmincon can only enforce the non-strict version, max(x)/min(x) >= 10.)
Maybe you could also create an anonymous function, I'm not sure (http://de.mathworks.com/help/matlab/matlab_prog/anonymous-functions.html).
Then pass this function to fmincon using the @ sign, i.e. at the specific location write

    fmincon(...., @mycondition, ...)

(The nonlinear constraint function is the ninth positional argument of fmincon.)
I'm creating several puzzle solvers in SWI-Prolog with CHR (Constraint Handling Rules).
Everything works great, but I'd like to test which solver is the best one.
Therefore I'd like to find out which solver uses the least number of backtracks.
Is there a clever way to find out (or print out) the number of backtracks that the solver needed to solve a particular puzzle?
Logically, counting would help, but an ordinary counter doesn't survive backtracking. Also, printing a new line on the screen for each backtrack isn't effective because of SWI's GUI: you can't print more than roughly 50 lines and can't select them properly.
It is indeed not trivial to accomplish this, given that Constraint Handling Rules maintain a 'constraint store' and the execution of rules may add, rewrite or remove constraints from this store at runtime. This changes the state of the program and makes it somewhat difficult to keep track of global state throughout execution.
However, since CHR is integrated in SWI, you can make use of the non-logical operation nb_setarg/3 to keep count of the backtracks.
Notes from the doc:
Compatible with GNU-Prolog's setarg(A,T,V,false)
This implementation is thread-safe, reentrant and capable of handling exceptions
EDIT
As for where to count the backtracks, this of course depends on your program, but counting will usually occur in the CHR rule that defines the fail condition of your search, allowing it to 'branch' (= rewrite CHR constraints). Every time such a rewrite of the constraint store occurs during search, it represents a backtrack, and you can increase a counter accordingly using the operation defined above.
Consider a small, abstract example:
    invalid_state ==> increment_backtracks, fail.
    guess <=> branch.
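As a minimal sketch of the counting itself (the predicate and key names are illustrative; this combines SWI's global variables nb_setval/2 and nb_getval/2 with nb_setarg/3): keep the count inside a compound term and update that argument destructively, so the increment survives backtracking.

    % Call once before the search starts.
    init_backtracks :-
        nb_setval(backtracks, count(0)).

    % Called from the CHR rule above; the destructive update via
    % nb_setarg/3 is not undone when the solver backtracks.
    increment_backtracks :-
        nb_getval(backtracks, State),
        arg(1, State, N0),
        N is N0 + 1,
        nb_setarg(1, State, N).

    % Query after the search to report the count.
    show_backtracks :-
        nb_getval(backtracks, count(N)),
        format("backtracks: ~d~n", [N]).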
Often we would like to refactor a context-free grammar to remove left-recursion. There are numerous algorithms that implement such a transformation.
Such algorithms will restructure a grammar regardless of the presence of left-recursion. This has negative side-effects, such as producing different parse trees from the original grammar, possibly with different associativity. Ideally a grammar would only be transformed if it was absolutely necessary.
Is there an algorithm or tool to identify the presence of left recursion within a grammar? Ideally this might also classify subsets of production rules which contain left recursion.
There is a standard algorithm for identifying nullable non-terminals, which runs in time linear in the size of the grammar (see below). Once you've done that, you can construct the relation A potentially-starts-with B over all non-terminals A, B. (In fact, it's more normal to construct that relationship over all grammatical symbols, since it is also used to construct FIRST sets, but in this case we only need the projection onto non-terminals.)
Having done that, left-recursive non-terminals are all A such that A potentially-starts-with+ A, where potentially-starts-with+ is:
potentially-starts-with ∘ potentially-starts-with*
You can use any transitive closure algorithm to compute that relation.
For reference, here is how to detect nullable non-terminals:
Remove all useless symbols.
Attach a pointer to every production, initially at the first position.
Put all the productions into a workqueue.
While possible, find a production to which one of the following applies:
If the left-hand-side of the production has been marked as an ε-non-terminal, discard the production.
If the token immediately to the right of the pointer is a terminal, discard the production.
If there is no token immediately to the right of the pointer (i.e., the pointer is at the end) mark the left-hand-side of the production as an ε-non-terminal and discard the production.
If the token immediately to the right of the pointer is a non-terminal which has been marked as an ε-non-terminal, advance the pointer one token to the right and return the production to the workqueue.
Once it is no longer possible to select a production from the work queue, all ε-non-terminals have been identified.
Just for fun, a trivial modification of the above algorithm can be used to do step 1. I'll leave it as an exercise (it's also an exercise in the dragon book). Also left as an exercise is the way to make sure the above algorithm executes in linear time.
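As an illustration, here is a naive Prolog sketch of both steps, runnable in SWI-Prolog (the grammar representation prod/2 and terminal/1 is an assumption, and the fixpoint below is quadratic rather than linear, so it mirrors the idea but not the complexity bound):

    % Assumed representation: prod(A, Rhs), with Rhs a list of symbols;
    % terminal/1 holds for the terminal symbols.
    % Example grammar:  e --> e '+' t | t.   t --> x.
    prod(e, [e, '+', t]).
    prod(e, [t]).
    prod(t, [x]).
    terminal('+').
    terminal(x).

    % nullables(-Ns): fixpoint computation of the epsilon-non-terminals.
    nullables(Ns) :-
        nullables([], Ns).

    nullables(Known0, Known) :-
        (   prod(A, Rhs),
            \+ memberchk(A, Known0),
            forall(member(S, Rhs), memberchk(S, Known0))
        ->  nullables([A|Known0], Known)
        ;   Known = Known0
        ).

    % potentially_starts_with(?A, ?B, +Ns): some production of A begins
    % with B, possibly after a prefix of nullable non-terminals.
    potentially_starts_with(A, B, Ns) :-
        prod(A, Rhs),
        starts_with(Rhs, B, Ns).

    starts_with([S|Ss], B, Ns) :-
        (   \+ terminal(S),
            B = S
        ;   memberchk(S, Ns),          % S is nullable: skip over it
            starts_with(Ss, B, Ns)
        ).

    % left_recursive(?A): A potentially-starts-with+ A.
    left_recursive(A) :-
        nullables(Ns),
        reaches(A, A, Ns, [A]).

    reaches(A, B, Ns, _) :-
        potentially_starts_with(A, B, Ns).
    reaches(A, B, Ns, Seen) :-
        potentially_starts_with(A, C, Ns),
        \+ memberchk(C, Seen),
        reaches(C, B, Ns, [C|Seen]).

For the example grammar, ?- left_recursive(A). yields A = e (and not t), as expected.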
I'm trying to solve a logic puzzle with Prolog, as a learning exercise, and I think I've correctly mapped the problem using the GNU Prolog finite domain solver.
When I run the solve function, Prolog spits back yes and a list of variables all bounded in the range 0..1 (booleans, as I've so constrained them). The problem is, when I try to add an fd_labeling(Solution) goal, Prolog does an about-face and spits out: no.
I'm new to this language and I can't seem to find any course of attack to figure out why everything seems to work until I actually ask it to label the answers...
Apparently, you didn't "correctly" map the problem to FD, since you get a "no" when you try to label the variables.
What you do in Constraint Logic Programming is set up a constraint model, where you have variables with a domain (in your case booleans with the domain [0,1]), and a number of constraints between these variables. Each constraint has a propagation rule that tries to achieve consistency for the domains of the variables on which the constraint is posted. Values that are not consistent are removed from the domains. There are several types of consistency, but they have one thing in common: the constraints usually won't by themselves give you a full solution, or even tell you whether there is a solution for the constraint model.
As an example, say you have two variables X and Y, both with domains [1..10], and the constraint X < Y. Then the propagation rule will remove the value 1 from the domain of Y and remove 10 from the domain of X. It will then stop, since the domains are now consistent: for each value in one domain there exists a value in the other domain so that the constraint is fulfilled.
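You can watch this propagation directly at the GNU Prolog toplevel (the variable numbers shown here are illustrative and will differ between runs):

    | ?- fd_domain([X,Y], 1, 10), X #< Y.

    X = _#3(1..9)
    Y = _#22(2..10)

    yes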
In order to get a solution (where all variables are bound to values), you need to label the variables. Each labeling wakes up the constraints attached to the labeled variable, triggering another round of propagation. This leads either to a solution (all variables bound to values, answer: yes) or to failure (in each branch of the search tree, some variable ends up with an empty domain, answer: no).
Since each constraint only aims for consistency of the domains of the variables on which it is posted, it is possible that an infeasibility arising from a combination of constraints is not detected during the propagation stage. For example, take three variables X, Y, Z, each with domain [1..2], and pairwise inequality constraints: each pair of domains is consistent on its own, yet no global solution exists. This seems to be what happened with your constraint model.
If you are sure that there must be a solution to the puzzle, then your constraint model contains some infeasibility. Maybe a sharp look at the constraints is already sufficient to spot it.
If you don't see any obvious infeasibility (e.g., some contradicting constraints like the inequality example above), you need to debug your program. If possible, don't use a built-in labeling predicate, but write your own. Then you can add some output goal that allows you to trace which variable was instantiated, what changes in the boolean decision variables this caused, and whether it led to a failure.
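For instance, a minimal sketch of such a hand-written labeling predicate in GNU Prolog, using fd_dom/2 to enumerate the remaining domain (the trace output is just an illustration):

    label_trace([]).
    label_trace([V|Vs]) :-
        fd_dom(V, Dom),            % values currently in V's domain
        member(Value, Dom),        % choice point: try each value in turn
        format("trying ~q~n", [Value]),
        V = Value,                 % unification triggers propagation
        label_trace(Vs).

Calling label_trace(Solution) instead of fd_labeling(Solution) then shows which tentative assignments are made before the failure occurs.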
(@twinterer already gave an explanation; my answer tries to take it from a different angle)
When you enter a query to Prolog what you get back is an answer. Often an answer contains a solution, sometimes it contains several solutions and sometimes it does not contain any solution at all. Quite often these two notions are confused. Let's look at examples with GNU Prolog:
| ?- length(Vs,3), fd_domain_bool(Vs).
Vs = [_#0(0..1),_#19(0..1),_#38(0..1)]
yes
Here, we have an answer that contains 8 solutions. That is:
| ?- length(Vs,3), fd_domain_bool(Vs), fd_labeling(Vs).
Vs = [0,0,0] ? ;
Vs = [0,0,1] ? ;
...
Vs = [1,1,1]
yes
And now another query. This is the example @twinterer referred to.
| ?- length(Vs,3), fd_domain_bool(Vs), fd_all_different(Vs).
Vs = [_#0(0..1),_#19(0..1),_#38(0..1)]
yes
The answer looks the same as before. However, it no longer contains a solution.
| ?- length(Vs,3), fd_domain_bool(Vs), fd_all_different(Vs), fd_labeling(Vs).
no
Ideally in such a case, the toplevel would not say "yes" but "maybe". In fact, CLP(R), one of the very first constraint systems, did this.
Another way to make this a little bit less mysterious is to show the actual constraints involved. SWI does this:
?- length(Vs,3), Vs ins 0..1, all_different(Vs).
Vs = [_G565,_G568,_G571],
_G565 in 0..1,
all_different([_G565,_G568,_G571]),
_G568 in 0..1,
_G571 in 0..1.
?- length(Vs,3), Vs ins 0..1, all_different(Vs), labeling([], Vs).
false.
So SWI shows you all constraints that have to be satisfied to get a concrete solution. Read SWI's answer as: Yes, there is a solution, provided all this fine print is true!
Alas, the fine print is false.
And yet another way to solve this problem is to get an implementation of all_different/1 with stronger consistency. But this only works in specific cases.
?- length(Vs,3), Vs ins 0..1, all_distinct(Vs).
false.
In the general case you cannot expect a system to maintain global consistency. Reasons:
Maintaining consistency can be very expensive. It is often better to delegate such decisions to labeling. In fact, the simple all_different/1 is often faster than all_distinct/1.
Better consistency algorithms are often very complex.
In the general case, maintaining global consistency is an undecidable problem.