How to implement this MP problem in ECLiPSe CLP or Prolog?

I want to implement these summations as the objective and constraints (1-6).
Could anyone help me with how to implement them?
OBJ: Min ∑(i=1..N)∑(j=1..N) Cij * ∑(k=1..K)Xijk
Constraint:
∑(k=1..K) Yik = 1   (for all i in N)

The following answer is specific to ECLiPSe (it uses loops, arrays and array slice notation, which are not part of standard Prolog).
I assume that N and K (and presumably C) are given, and your matrices are declared as
dim(C, [N,N]),
dim(X, [N,N,K]),
dim(Y, [N,K]),
You can then set up the constraints in a loop:
Constraint: ∑(k=1..K) Yik = 1 (for all i in N)
( for(I,1,N), param(Y) do
    sum(Y[I,*]) $= 1
),
Note that the notation sum(Y[I,*]) here is a shorthand for sum([Y[I,1],Y[I,2],...,Y[I,K]]), where K is the size of this array dimension.
For your objective, because of the nested sum, an auxiliary loop/list is still necessary:
OBJ: Min ∑(i=1..N)∑(j=1..N) Cij * ∑(k=1..K)Xijk
( multifor([I,J],1,N), foreach(Term,Terms), param(C,X) do
    Term = (C[I,J] * sum(X[I,J,*]))
),
Objective = sum(Terms),
...
You then have to pass this objective expression to the solver -- the details depend on which solver you use (e.g. eplex, ic).
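For example, with eplex the whole model might be wired up roughly as follows. This is only a sketch under my own assumptions: I treat all X and Y variables as 0..1 integers, I rely on eplex's integers/1 and optimize/2 (check the documentation of your ECLiPSe version for the exact interface), the predicate name mp_model/4 is made up, and retrieving the solution values of X and Y after solving is left out.
:- lib(eplex).

% Sketch only: C is an NxN array of costs, N and K are given integers.
mp_model(C, N, K, Cost) :-
    dim(C, [N,N]),
    dim(X, [N,N,K]),
    dim(Y, [N,K]),

    % assumption: every X[I,J,L] and Y[I,L] is a 0..1 integer variable
    ( multifor([I,J,L], 1, [N,N,K]), param(X) do
        subscript(X, [I,J,L], Xijl),
        integers([Xijl]),
        Xijl $>= 0, Xijl $=< 1
    ),
    ( multifor([I,L], 1, [N,K]), param(Y) do
        subscript(Y, [I,L], Yil),
        integers([Yil]),
        Yil $>= 0, Yil $=< 1
    ),

    % constraint: sum(k=1..K) Y[i,k] = 1 for all i
    ( for(I,1,N), param(Y) do
        sum(Y[I,*]) $= 1
    ),

    % objective: sum(i,j) C[i,j] * sum(k) X[i,j,k]
    ( multifor([I,J],1,N), foreach(Term,Terms), param(C,X) do
        Term = (C[I,J] * sum(X[I,J,*]))
    ),
    Objective = sum(Terms),

    % assuming eplex's optimize/2 (one-shot setup, solve and cleanup):
    % minimise the objective and return the optimal cost
    optimize(min(Objective), Cost).
With ic instead of eplex, the same constraints can be posted, but the minimisation would go through a search routine such as bb_min rather than optimize/2.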

Related

Combine boolean and integer logic in linear arithmetic using the Z3 Solver?

I would like to solve problems combining boolean and integer logic in linear arithmetic with a SAT/SMT solver. At first glance, Z3 seems promising.
First of all, is it at all possible to solve the following problem? This answer makes it seem like it works.
int x, y, z
boolean A, B, C
( (3x + y - 2z >= 10) OR (A AND (NOT B OR C)) OR ((A == C) AND (x + y >= 5)) )
If so, how does Z3 solve this kind of problem in theory and is there any documentation about it?
I could think of two ways to solve this problem. One would be to convert the Boolean operations into a linear integer expression. Another solution I read about is to use the Nelson-Oppen Combination Method described in [Kro 08].
I found corresponding documentation in chapter 3.2.2, Solving Arithmetical Fragments, where Table 1 lists the implemented algorithms for each logic.
Yes, SMT solvers are quite good at solving problems of this sort. Your problem can be expressed using z3's Python interface like this:
from z3 import *
x, y, z = Ints('x y z')
A, B, C = Bools('A B C')
solve(Or(3*x + y - 2*z >= 10,
         And(A, Or(Not(B), C)),
         And(A == C, x + y >= 5)))
This prints:
[A = True, z = 3, y = 0, B = True, C = True, x = 5]
giving you a (not necessarily "the") model that satisfies your constraints.
SMT solvers can deal with integers, machine words (i.e., bit-vectors), reals, along with many other data types, and there are efficient procedures for combinations of linear-integer-arithmetic, booleans, uninterpreted-functions, bit-vectors amongst many others.
See http://smtlib.cs.uiowa.edu for many resources on SMT solving, including references to other work. Any given solver (e.g., z3, yices, cvc) will be a collection of various algorithms, heuristics and tactics. It's hard to compare them directly, as each shines in its own way for certain sublogics, but for the base set of linear integer arithmetic, booleans, and bit-vectors they should all perform fairly well. It looks like you have already found some good references, so you can do further reading as necessary; though for most end users it's neither necessary nor that important to know how an SMT solver works internally.

How to implement a union-find (disjoint set) data structure in Coq?

I am quite new to Coq, but for my project I have to use a union-find data structure in Coq. Are there any implementations of the union-find (disjoint set) data structure in Coq?
If not, can someone provide an implementation or some ideas? It doesn't have to be very efficient (no need for path compression or all the fancy optimizations); I just need a data structure that can hold an arbitrary data type (or nat if that's too hard) and perform union and find.
Thanks in advance
If all you need is a mathematical model, with no concern for actual performance, I would go for the most straightforward one: a functional map (finite partial function) in which each element optionally links to another element with which it has been merged.
If an element links to nothing, then its canonical representative is itself.
If an element links to another element, then its canonical representative is the canonical representative of that other element.
Note: in the remainder of this answer, as is standard with union-find, I will assume that elements are simply natural numbers. If you want another type of elements, simply have another map that binds all elements to unique numbers.
Then you would define a function find : UnionFind → nat → nat that returns the canonical representative of a given element, by following links as long as you can. Notice that the function would use recursion, whose termination argument is not trivial. To make it happen, I think that the easiest way is to maintain the invariant that a number only links to a lesser number (i.e. if i links to j, then i > j). Then the recursion terminates because, when following links, the current element is a decreasing natural number.
Defining the function union : UnionFind → nat → nat → UnionFind is easier: union m i j simply returns an updated map with max i' j' linking to min i' j', where i' = find m i and j' = find m j.
[Side note on performance: maintaining the invariant means that you cannot adequately choose which of a pair of partitions to merge into the other, based on their ranks; however you can still implement path compression if you want!]
As for which data structure exactly to use for the map: there are several available.
The standard library (look under the title FSets) has several implementations (FMapList, FMapPositive and so on) satisfying the interface FMapInterface.
The stdpp library has gmap.
Again if performance is not a concern, just pick the simplest encoding or, more importantly, the one that makes your proofs the simplest. I am thinking of just a list of natural numbers.
The positions of the list are the elements in reverse order.
The values of the list are offsets, i.e. the number of positions to skip forward in order to reach the target of the link.
For an element i linking to j (i > j), the offset is i − j.
For a canonical representative, the offset is zero.
For example, here is a map where the links are { 6↦2, 4↦2, 3↦0, 2↦1 } and the canonical representatives are { 5, 1, 0 }:
element:    6   5   4   3   2   1   0
map:      [ 4 ; 0 ; 2 ; 3 ; 1 ; 0 ; 0 ]
Reading the offsets: element 6 links to 6 − 4 = 2, element 4 links to 4 − 2 = 2, element 3 links to 3 − 3 = 0, element 2 links to 2 − 1 = 1, and elements 5, 1 and 0 (offset 0) are their own canonical representatives.
The motivation is that the invariant discussed above is then enforced structurally. Hence, there is hope that find could actually be defined by structural induction (on the structure of the list), and have termination for free.
A related paper is: Sylvain Conchon and Jean-Christophe Filliâtre. A Persistent Union-Find Data Structure. In ACM SIGPLAN Workshop on ML.
It describes the implementation of an efficient union-find data structure in ML, that is persistent from the user perspective, but uses mutation internally. What may be more interesting for you, is that they prove it correct in Coq, which implies that they have a Coq model for union-find. However, this model reflects the memory store for the imperative program that they seek to prove correct. I’m not sure how applicable it is to your problem.
Maëlan has a good answer, but for an even simpler and more inefficient disjoint set data structure, you can just use functions to nat to represent them. This avoids any termination stickiness. In essence, the preimages of any total function form disjoint sets over the domain. Another way of looking at this is as representing any disjoint set G as the curried application find_root G : nat -> nat since find_root is the essential interface that disjoint sets provide.
This is also analogous to using functions to represent Maps in Coq like in Software Foundations. https://softwarefoundations.cis.upenn.edu/lf-current/Maps.html
Require Import Arith.
Search eq_nat_decide.
(* disjoint set *)
Definition ds := nat -> nat.
Definition init_ds : ds := fun x => x.
Definition find_root (g : ds) x := g x.
Definition in_same_set (g : ds) x y :=
  eq_nat_decide (g x) (g y).

Definition union (g : ds) x y : ds :=
  fun z =>
    if in_same_set g x z
    then find_root g y
    else find_root g z.
You can also make it generic over the type held in the disjoint set like so
Definition ds (a : Type) := a -> nat.
Definition find_root {a} (g : ds a) x := g x.
Definition in_same_set {a} (g : ds a) x y :=
  eq_nat_decide (g x) (g y).

Definition union {a} (g : ds a) x y : ds a :=
  fun z =>
    if in_same_set g x z
    then find_root g y
    else find_root g z.
To initialize the disjoint set for a particular a, you basically need an Enum instance for your type a.
Definition init_bool_ds : ds bool := fun x => if x then 0 else 1.
You may want to trade out eq_nat_decide for eqb or some other roughly equivalent thing depending on your proof style and needs.

A concise way to access an element of a set in pseudocode

Let networklocs be a set of elements of the form (n, t, l), where n is a node in the network, t is a clock tick and l is the location of n at time t. How can I get (in a concise way, in pseudocode) the element of networklocs for a given node and time?
I know I can write a function like
getElement(ni, t)
    for all (nj, t', l') in networklocs
        if nj = ni and t' = t then return (nj, t', l')
But is there a more concise way to access an element of the set networklocs in the pseudo-code?
Note that I would like to keep networklocs as a set, so solutions with maps or arrays do not fit.
Note that returning ni and t is useless because they're already known. In notation and practice, you'd want
Let M be a map: <N, T> -> L
The operation you want is just a map lookup:
l <- M <n, t>
Although the map is the most likely notation, a predicate logic expression can also be used:
Let get(n, t) be the x = <n, t, l> such that x ∈ networklocs
The loop you provided as an example is neither correct nor pseudocode. It's a concrete implementation of a map, and it doesn't say what to do when the key is not found.

Evaluating three-variable expression in Prolog

Follow the Four-Step Abstract design process to define recursive rules to compute mathematical functions. You must indicate (using comments in the code) which step is used. Note, a Prolog rule does not return a value; you need to use a parameter to hold the return value. You may NOT use the exponential operator ** to compute the expressions.
Write a recursive rule factbar(F, X, Y, N) to compute F = ((2*X + Y)^N)! (the factorial of expbar). The rule must call (use) the rule expbar that you designed.
Now, for the operation F = ((2*X + Y)^N) I have already written my code, but I do not know how to write a factorial in Prolog:
expbar(R, X, Y, N) :-
    X > 0, Y > 0, N > 0,
    R is (2 * X + Y) ** N.
Although I have used ** in my program for the exponent, I do not know how to do it the other way.
I have no idea what the "four-step abstract design process" is, and you haven't included that detail. As a result, you're instead going to get my two-step recursive function design process. Your predicate is right, except that instead of ** you need to define and use pow/3, a predicate to compute powers. This is obviously the crux of your assignment. Let's do it.
Step one: identify your base cases. With arithmetic functions, the base case involves the arithmetic identity. For exponentiation, the identity is 1. In other words, X**1 = X. Write this down:
pow(X,1,X).
Because this is a function with two inputs and one result, we'll encode it as an arity-3 predicate. This fact simply says X to the 1st power is X.
Step two. Now consider the inductive case. If I have X**N, I can expand it to X * (X**(N-1)). By the definition of exponentiation and the induction rule, this completes the definition of the predicate. Encode it in Prolog syntax:
pow(X, N, Y) :-
    N > 1,
    succ(N0, N),
    pow(X, N0, Y0),
    Y is X * Y0, !.
This gives you a predicate for calculating exponents. If you replace your use of **/2 in your expbar/4 predicate, you fulfill the requirements of your assignment.
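To complete the assignment you also need the factorial itself, which the question asks about but the answer above does not spell out. The same base-case/inductive-case pattern applies; here is a minimal sketch (the predicate names fact/2 and factbar/4 are my own choices, and factbar/4 assumes expbar/4 has been rewritten to use pow/3 instead of **):
% base case: 0! = 1
fact(0, 1).

% inductive case: N! = N * (N-1)!
fact(N, F) :-
    N > 0,
    succ(N0, N),
    fact(N0, F0),
    F is N * F0.

% factbar(F, X, Y, N): F = ((2*X + Y)^N)!
factbar(F, X, Y, N) :-
    expbar(R, X, Y, N),   % R = (2*X + Y)^N
    fact(R, F).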

Best way to do an iteration scheme

I hope this hasn't been asked before, if so I apologize.
EDIT: For clarity, the following notation will be used: boldface uppercase for matrices, boldface lowercase for vectors, and italics for scalars.
Suppose x0 is a vector, A and B are matrix functions, and f is a vector function.
I'm looking for the best way to do the following iteration scheme in Mathematica:
A0 = A(x0), B0=B(x0), f0 = f(x0)
x1 = Inverse(A0)(B0.x0 + f0)
A1 = A(x1), B1=B(x1), f1 = f(x1)
x2 = Inverse(A1)(B1.x1 + f1)
...
I know that a for-loop can do the trick, but I'm not quite familiar with Mathematica, and I'm concerned that this is the most efficient way to do it. This is a justified concern as I would like to define a function u(N):=xNand use it in further calculations.
I guess my questions are:
What's the most efficient way to program the scheme?
Is RecurrenceTable a way to go?
EDIT
It was a bit more complicated than I thought. I'm providing more details in order to get a more thorough response.
Before doing the recurrence, I'm having problems understanding how to program the functions A, B and f.
Matrices A and B are functions of the time step dt = 1/T and the space step dx = 1/M, where T and M are the number of points in the {0 < x < 1, 0 < t} region. This is also true for the vector function f.
The dependence of A, B and f on x is rather tricky:
A and B are upper and lower triangular matrices (like a tridiagonal matrix; I suppose we can call them multidiagonal), with defined constant values on their diagonals.
Given a point 0 < xs < 1, I need to determine its representative xn in the mesh (the closest one), and then substitute the nth row of A and B with the function v(x) (transposed, of course), and the nth row of f with the function w(x).
Summarizing, A = A(dt, dx, xs, x). The same is true for B and f.
Then I need to do the loop mentioned above, to define u(x) = step[T].
Hope I've explained myself.
I'm not sure if it's the best method, but I'd just use plain old memoization. You can represent an individual step as
xstep[x_] := Inverse[A[x]].(B[x].x + f[x])
and then
u[0] = x0
u[n_] := u[n] = xstep[u[n-1]]
If you know how many values you need in advance, and it's advantageous to precompute them all for some reason (e.g. you want to open a file, use its contents to calculate xN, and then free the memory), you could use NestList. Instead of the previous two lines, you'd do
xlist = NestList[xstep, x0, 10];
u[n_] := xlist[[n]]
This will break if n > 10, of course (obviously, change 10 to suit your actual requirements).
Of course, it may be worth looking at your specific functions to see if you can make some algebraic simplifications.
I would probably write a function that accepts A0, B0, x0, and f0, and then returns A1, B1, x1, and f1 - say
step[A0_?MatrixQ, B0_?MatrixQ, x0_?VectorQ, f0_?VectorQ] := Module[...]
I would then Nest that function. It's hard to be more precise without more precise information.
Also, if your procedure is numerical, then you certainly don't want to compute Inverse[A0], as this is not a numerically stable operation. Rather, you should write
A0.x1 == B0.x0+f0
and then use a numerically stable solver to find x1. Of course, Mathematica's LinearSolve provides such an algorithm.
