Definition of Sets in OCaml

I have a problem with creating a collection containing heterogeneous elements; in particular, the elements will be structured as follows:
(a,1), ((a,1),1), (((a,1),1),1) and so on...
Can I do this using the Set module of OCaml?
Moreover, is there also some function that allows me to take the Cartesian product of sets (also heterogeneous)?

You cannot build sets of heterogeneous elements. Of course you can define a type to unify the types if you know them in advance. It looks like you do, and it may be the recursive type defined by:
type ('a, 'b) r =
  | L of 'a
  | N of (('a, 'b) r * 'b)
Thus, your examples would be constructed as:
N (L a, 1)
N (N (L a, 1), 1)
N (N (N (L a, 1), 1), 1)
Then you would just build the OrderedType module (providing the compare function) to pass to Set.Make.
For the Cartesian product, you wouldn't be dealing with heterogeneous elements at that point, but with pairs of the previous type. This would require a new OrderedType module to handle comparisons of those pairs.
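For illustration, here is a minimal sketch of the first part (the module and value names are mine; it assumes the type parameters are instantiated at string and int):

module RSet = Set.Make (struct
  type t = (string, int) r
  let compare = Stdlib.compare   (* the polymorphic compare suffices for this type *)
end)

let example_set =
  RSet.of_list [ N (L "a", 1); N (N (L "a", 1), 1) ]

For the Cartesian product you would similarly instantiate Set.Make at a pair type such as RSet.elt * RSet.elt, and build the product with two nested folds over the operand sets.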

No, from http://caml.inria.fr/pub/docs/manual-ocaml/libref/Set.S.html you can see that the sets in module Set are homogeneous.
You can use the approach described in http://alan.petitepomme.net/cwn/2010.02.09.html for dictionaries instead.
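The approach described there relies on a "universal" type into which values of arbitrary types can be injected and projected back. As a rough sketch of one common encoding of such a type (this is my own illustration, not the code from the article):

module Univ : sig
  type t
  val embed : unit -> ('a -> t) * (t -> 'a option)
end = struct
  type t = exn
  let embed (type a) () =
    (* a fresh exception constructor serves as the injection tag *)
    let module M = struct exception E of a end in
    ((fun x -> M.E x), (function M.E x -> Some x | _ -> None))
end

let (of_int : int -> Univ.t), to_int = Univ.embed ()
let (of_string : string -> Univ.t), to_string = Univ.embed ()
let mixed = [ of_int 1; of_string "hello" ]       (* a heterogeneous Univ.t list *)
let ints = List.filter_map to_int mixed           (* [1] *)
let strings = List.filter_map to_string mixed     (* ["hello"] *)

A heterogeneous dictionary can then map keys to Univ.t values.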

Related

How to implement a union-find (disjoint set) data structure in Coq?

I am quite new to Coq, but for my project I have to use a union-find data structure in Coq. Are there any implementations of the union-find (disjoint set) data structure in Coq?
If not, can someone provide an implementation or some ideas? It doesn't have to be very efficient (no need for path compression or all the fancy optimizations); I just need a data structure that can hold an arbitrary data type (or nat if that's too hard) and support union and find.
Thanks in advance
If all you need is a mathematical model, with no concern for actual performance, I would go for the most straightforward one: a functional map (finite partial function) in which each element optionally links to another element with which it has been merged.
If an element links to nothing, then its canonical representative is itself.
If an element links to another element, then its canonical representative is the canonical representative of that other element.
Note: in the remainder of this answer, as is standard with union-find, I will assume that elements are simply natural numbers. If you want another type of elements, simply have another map that binds all elements to unique numbers.
Then you would define a function find : UnionFind → nat → nat that returns the canonical representative of a given element, by following links as long as you can. Notice that the function would use recursion, whose termination argument is not trivial. To make it happen, I think that the easiest way is to maintain the invariant that a number only links to a lesser number (i.e. if i links to j, then i > j). Then the recursion terminates because, when following links, the current element is a decreasing natural number.
Defining the function union : UnionFind → nat → nat → UnionFind is easier: union m i j simply returns an updated map with max i' j' linking to min i' j', where i' = find m i and j' = find m j.
[Side note on performance: maintaining the invariant means that you cannot adequately choose which of a pair of partitions to merge into the other, based on their ranks; however you can still implement path compression if you want!]
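To make this concrete, here is a rough OCaml sketch of the model (OCaml rather than Coq, purely to illustrate the invariant; the names are mine):

(* i mapping to j means "i links to j", with the invariant i > j;
   an element absent from the map is its own canonical representative *)
module IntMap = Map.Make (Int)

type union_find = int IntMap.t

let rec find (m : union_find) (i : int) : int =
  match IntMap.find_opt i m with
  | None -> i                      (* no link: i is canonical *)
  | Some j -> find m j             (* follow the link to a smaller element *)

let union (m : union_find) (i : int) (j : int) : union_find =
  let i' = find m i and j' = find m j in
  if i' = j' then m
  else IntMap.add (max i' j') (min i' j') m   (* larger root links to smaller *)

In OCaml the recursion in find just runs; in Coq the point is that the i > j invariant gives a decreasing measure that justifies termination.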
As for which data structure exactly to use for the map: there are several available.
The standard library (look under the title FSets) has several implementations (FMapList, FMapPositive and so on) satisfying the interface FMapInterface.
The stdpp library has gmap.
Again if performance is not a concern, just pick the simplest encoding or, more importantly, the one that makes your proofs the simplest. I am thinking of just a list of natural numbers.
The positions of the list are the elements in reverse order.
The values of the list are offsets, i.e. the number of positions to skip forward in order to reach the target of the link.
For an element i linking to j (i > j), the offset is i − j.
For a canonical representative, the offset is zero.
With my best pseudo-ASCII-art skills, here is a map where the links are { 6↦2, 4↦2, 3↦0, 2↦1 } and the canonical representatives are { 5, 1, 0 }:
element:   6   5   4   3   2   1   0
           ↓   ↓   ↓   ↓   ↓   ↓   ↓
map:     [ 4 ; 0 ; 2 ; 3 ; 1 ; 0 ; 0 ]
(for instance, element 6 has offset 4 and thus links to 6 − 4 = 2; entries with offset 0 are canonical representatives)
The motivation is that the invariant discussed above is then enforced structurally. Hence, there is hope that find could actually be defined by structural induction (on the structure of the list), and have termination for free.
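If it helps to see that encoding operationally, here is a rough OCaml transcription (OCaml rather than Coq, and the helper names are mine); every recursive call is on the tail of the list, which is exactly the structural recursion that would give termination for free:

(* [find_from m skip] walks the offset list [m]: it first skips [skip]
   entries, then follows offsets until it reaches a 0, and returns the
   element number of that root.  An entry's element number is the length of
   the remaining list minus one, since elements are stored in reverse order. *)
let rec find_from (m : int list) (skip : int) : int =
  match m with
  | [] -> 0                                   (* unreachable for well-formed maps *)
  | offset :: rest ->
      if skip > 0 then find_from rest (skip - 1)
      else if offset = 0 then List.length m - 1
      else find_from rest (offset - 1)

(* element i sits at position (n - 1 - i) from the left, n being the length *)
let find (m : int list) (i : int) : int =
  find_from m (List.length m - 1 - i)

let example = [4; 0; 2; 3; 1; 0; 0]    (* the map drawn above *)
let () = assert (find example 6 = 1)   (* 6 links to 2, which links to 1 *)
let () = assert (find example 3 = 0)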
A related paper is: Sylvain Conchon and Jean-Christophe Filliâtre. A Persistent Union-Find Data Structure. In ACM SIGPLAN Workshop on ML.
It describes the implementation of an efficient union-find data structure in ML, that is persistent from the user perspective, but uses mutation internally. What may be more interesting for you, is that they prove it correct in Coq, which implies that they have a Coq model for union-find. However, this model reflects the memory store for the imperative program that they seek to prove correct. I’m not sure how applicable it is to your problem.
Maëlan has a good answer, but for an even simpler and more inefficient disjoint set data structure, you can just use functions to nat to represent them. This avoids any termination stickiness. In essence, the preimages of any total function form disjoint sets over the domain. Another way of looking at this is as representing any disjoint set G as the curried application find_root G : nat -> nat since find_root is the essential interface that disjoint sets provide.
This is also analogous to using functions to represent Maps in Coq like in Software Foundations. https://softwarefoundations.cis.upenn.edu/lf-current/Maps.html
Require Import Arith.
Search eq_nat_decide.
(* disjoint set *)
Definition ds := nat -> nat.
Definition init_ds : ds := fun x => x.
Definition find_root (g : ds) x := g x.
Definition in_same_set (g : ds) x y :=
  eq_nat_decide (g x) (g y).
Definition union (g : ds) x y : ds :=
  fun z =>
    if in_same_set g x z
    then find_root g y
    else find_root g z.
You can also make it generic over the type held in the disjoint set like so
Definition ds (a : Type) := a -> nat.
Definition find_root {a} (g : ds a) x := g x.
Definition in_same_set {a} (g : ds a) x y :=
  eq_nat_decide (g x) (g y).
Definition union {a} (g : ds a) x y : ds a :=
  fun z =>
    if in_same_set g x z
    then find_root g y
    else find_root g z.
To initialize the disjoint set for a particular a, you basically need an Enum instance for your type a.
Definition init_bool_ds : ds bool := fun x => if x then 0 else 1.
You may want to trade out eq_nat_decide for eqb or some other roughly equivalent thing depending on your proof style and needs.

How to implement this MP problem in ECLiPSe CLP or Prolog?

I want to implement these summations as an objective and constraints (1-6).
Could anyone help me with how I can implement them?
OBJ: Min ∑(i=1..N)∑(j=1..N) Cij * ∑(k=1..K)Xijk
constraint :
∑(k=1..K) Yik=1 (for all i in N)
The following answer is specific to ECLiPSe (it uses loops, array and array slice notation, which are not part of standard Prolog).
I assume that N and K (and presumably C) are given, and your matrices are declared as
dim(C, [N,N]),
dim(X, [N,N,K]),
dim(Y, [N,K]),
You can then set up the constraints in a loop:
constraint : ∑(k=1..K) Yik=1 (for all i in N)
( for(I,1,N), param(Y) do
    sum(Y[I,*]) $= 1
),
Note that the notation sum(Y[I,*]) here is a shorthand for sum([Y[I,1],Y[I,2],...,Y[I,K]]) when K is the size of this array dimension.
For your objective, because of the nested sum, an auxiliary loop/list is still necessary:
OBJ: Min ∑(i=1..N)∑(j=1..N) Cij * ∑(k=1..K)Xijk
( multifor([I,J],1,N), foreach(Term,Terms), param(C,X) do
    Term = (C[I,J] * sum(X[I,J,*]))
),
Objective = sum(Terms),
...
You then have to pass this objective expression to the solver -- the details depend on which solver you use (e.g. eplex, ic).

OCaml homework: need some advice

We have N sets of integers A1, A2, A3, ..., An. Find an algorithm that returns a list containing one element from each of the sets, with the property that the difference between the largest and the smallest element in the list is minimal.
Example:
IN: A1 = [0,4,9], A2 = [2,6,11], A3 = [3,8,13], A4 = [7,12]
OUT: [9,6,8,7]
I have an idea about this exercise: first we need to sort all the elements into one list (every element needs to be tagged with its set), so with that input we get this:
[[0,1],[2,2],[3,3],[4,1],[6,2],[7,4],[8,3],[9,1],[11,2],[12,4],[13,3]]
Later on we create all possible lists, find the one with the smallest difference between its smallest and largest element, and return the correct output, like this: [9,6,8,7]
I am a newbie in OCaml, so I have some questions about coding this:
Can I create a function with N (an arbitrary number of) arguments?
Should I create a new type, like a list of pairs, to realize these assumptions?
Sorry for my bad English; I hope you understand what I wanted to express.
This answer is about the algorithmic part, not the OCaml code.
You might want to implement your proposed solution first, to have a working one and to compare its results with an improved solution, which I now write about.
Here is a hint about how to improve the algorithmic part. Consider sorting all sets, not only the first one. Now the list of all minimum elements from all sets is a candidate for the output.
To consider other candidate outputs, how can you move on from there?
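In case you want to see where the hint can lead, here is one possible OCaml sketch (my own elaboration, going beyond what the hint deliberately leaves open; it assumes every set is non-empty):

(* Sort each set, start from the list of all minima, and repeatedly advance
   inside a set whose head is the current minimum, remembering the candidate
   with the smallest spread seen so far.  Stop when that set cannot advance. *)
let smallest_spread (sets : int list list) : int list =
  let spread xs =
    List.fold_left max min_int xs - List.fold_left min max_int xs
  in
  let rec loop current best =
    let candidate = List.map List.hd current in
    let best = if spread candidate < spread best then candidate else best in
    let min_head = List.fold_left min max_int candidate in
    (* advance one step inside the first set whose head is the minimum *)
    let rec advance = function
      | [] -> None
      | s :: rest when List.hd s = min_head ->
          (match s with
           | _ :: (_ :: _ as s') -> Some (s' :: rest)
           | _ -> None)                          (* that set is exhausted: stop *)
      | s :: rest ->
          (match advance rest with
           | Some rest' -> Some (s :: rest')
           | None -> None)
    in
    match advance current with
    | Some next -> loop next best
    | None -> best
  in
  let sorted = List.map (List.sort compare) sets in
  loop sorted (List.map List.hd sorted)

On the sets from the question, smallest_spread [[0;4;9]; [2;6;11]; [3;8;13]; [7;12]] evaluates to [9; 6; 8; 7].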
I'm just going to answer your questions, rather than comment on your proposed solution. (But I think you'll have to work on it a little more before you're done.)
You can write a function that takes a list of lists. This is pretty much the same as allowing an arbitrary number of arguments. But really it just has one argument (like all functions in OCaml).
You can just use built-in types like lists and tuples; you don't need to create or declare them explicitly.
Here's an example function that takes a list of lists and combines them into one big long list:
let rec concat lists =
  match lists with
  | [] -> []
  | head :: tail -> head @ concat tail
Here is the routine you described in the question to get you started. Note that I did not pay any attention to efficiency. I also added the reverse apply (pipe) operator for clarity.
let test_set = [[0; 4; 9]; [2; 6; 11]; [3; 8; 13]; [7; 12]]
let (|>) g f = f g
let linearize sets =
  let open List in
  sets
  |> mapi (fun i e -> e |> map (fun x -> (x, i + 1)))
  |> flatten
  |> sort (fun (e1, _) (e2, _) -> compare e1 e2)
let sorted = linearize test_set
Your approach does not sound very efficient: with n sets, each with x_i elements, your sorted list will have Σ x_i elements, and the number of sub-lists you can generate out of that is enormous (factorial in the total number of elements).
I'd like to propose a different approach, but you'll have to work out the details:
Tag (index) each element with its set identifier (like you have done).
Sort each set individually.
Build the exact opposite to that of your desired result!
Optimize!
I hope you can figure out steps 3, 4 on your own... :)

Default way of executing code in Haskell

In the following generalized code:
nat = [1..xmax]
xmax = *insert arbitrary Integral value here*
setA = [2*x | x <- nat]
setB = [3*x | x <- nat]
setC = [4*x | x <- nat]
setD = [5*x | x <- nat]
setOne = setA `f` setB
setTwo = setC `f` setD
setAll = setOne ++ setTwo
setAllSorted = quicksort setAll
(please note that 'f' stands for a function of type
f :: Integral a => [a] -> [a] -> [a]
that is not simply ++)
how does Haskell handle attempting to print setAllSorted?
Does it get the values for setA and setB, compute setOne, and then keep only the values for setOne in memory (before computing everything else)?
Or does Haskell keep everything in memory until having gotten the value for setAllSorted?
If the latter is the case then how would I specify (using main, do functions and all that other IO stuff) that it do the former instead?
Can I tell the program in which order to compute and garbage collect? If so, how would I do that?
The head of setAllSorted is necessarily less-than-or-equal to every element in the tail. Therefore, in order to determine the head, all of setOne and setTwo must be computed. Furthermore, since all of the sets are constant applicative forms, I believe they will not be garbage collected after being computed. The numbers themselves will likely be shared between the sets, but the cons nodes that glue them together will likely not be (your luck with some will depend upon the definition of f).
Due to laziness, Haskell evaluates things on-demand. You can think of the printing done at the end as "pulling" elements from the list setAllSorted, and that might pull other things with it.
That is, running this code it goes something like this:
Printing first evaluates the first element of setAllSorted.
Since this comes from a sorting procedure, it will require all the elements of setAll to be evaluated. (Since the smallest element could be the last one).
Evaluating the first element of setAll requires evaluating the first element of setOne.
Evaluating the first element of setOne depends on how f is implemented. It might require all or none of setA and setB to be evaluated.
After we're done printing the first element of setAllSorted, setAll will have been fully evaluated. There are no more references to setOne, setTwo and the smaller sets, so all of these are by now eligible for garbage collection. The first element of setAllSorted can also be reclaimed.
So in theory, this code will keep setAll in memory most of the time, while setAllSorted, setOne and setTwo will likely only occupy a constant amount of space at any time. Depending on the implementation of f, the same may be true for the smaller sets.

Haskell's algebraic data types

I'm trying to fully understand all of Haskell's concepts.
In what ways are algebraic data types similar to generic types, e.g., in C# and Java? And how are they different? What's so algebraic about them anyway?
I'm familiar with universal algebra and its rings and fields, but I only have a vague idea of how Haskell's types work.
Haskell's algebraic data types are named such since they correspond to an initial algebra in category theory, giving us some laws, some operations and some symbols to manipulate. We may even use algebraic notation for describing regular data structures, where:
+ represents sum types (disjoint unions, e.g. Either).
• represents product types (e.g. structs or tuples)
X for the singleton type (e.g. data X a = X a)
1 for the unit type ()
and μ for the least fixed point (e.g. recursive types), usually implicit.
with some additional notation:
X² for X•X
In fact, you might say (following Brent Yorgey) that a Haskell data type is regular if it can be expressed in terms of 1, X, +, •, and a least fixed point.
With this notation, we can concisely describe many regular data structures:
Units: data () = ()
1
Options: data Maybe a = Nothing | Just a
1 + X
Lists: data [a] = [] | a : [a]
L = 1+X•L
Binary trees: data BTree a = Empty | Node a (BTree a) (BTree a)
B = 1 + X•B²
Other operations hold (taken from Brent Yorgey's paper, listed in the references):
Expansion: unfolding the fixed point can be helpful for thinking about lists. L = 1 + X + X² + X³ + ... (that is, lists are either empty, or they have one element, or two elements, or three, or ...)
Composition, ◦: given types F and G, the composition F ◦ G is a type which builds "F-structures made out of G-structures" (e.g. R = X • (L ◦ R), where L is lists, is a rose tree).
Differentiation: the derivative of a data type D (written D′) is the type of D-structures with a single "hole", that is, a distinguished location not containing any data. Amazingly, this satisfies the same rules as differentiation in calculus:
1′ = 0
X′ = 1
(F + G)′ = F′ + G′
(F • G)′ = F • G′ + F′ • G
(F ◦ G)′ = (F′ ◦ G) • G′
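A standard worked example: for lists, L = 1 + X•L, so L′ = L + X•L′, which (using the expansion L = 1 + X + X² + ... above) solves to L′ = L•L. In other words, a one-hole context in a list is a pair of lists, the elements before and after the hole, i.e. exactly a list zipper.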
References:
Species and Functors and Types, Oh My!, Brent A. Yorgey, Haskell’10, September 30, 2010, Baltimore, Maryland, USA
Clowns to the left of me, jokers to the right (Dissecting Data Structures), Conor McBride, POPL 2008
"Algebraic Data Types" in Haskell support full parametric polymorphism, which is the more technically correct name for generics, as a simple example the list data type:
data List a = Cons a (List a) | Nil
It is equivalent (as much as is possible, and ignoring non-strict evaluation, etc.) to
class List<a> {
    class Cons : List<a> {
        a head;
        List<a> tail;
    }
    class Nil : List<a> {}
}
Of course Haskell's type system allows more ... interesting use of type parameters, but this is just a simple example. With regards to the "algebraic type" name, I've honestly never been entirely sure of the exact reason for them being named that, but have assumed that it's due to the mathematical underpinnings of the type system. I believe that the reason boils down to the theoretical definition of an ADT being the "product of a set of constructors"; however, it's been a couple of years since I escaped university, so I can no longer remember the specifics.
[Edit: Thanks to Chris Conway for pointing out my foolish error: ADTs are of course sum types, the constructors providing the product/tuple of fields.]
In universal algebra, an algebra consists of some sets of elements (think of each set as the set of values of a type) and some operations, which map elements to elements.
For example, suppose you have a type of "list elements" and a type of "lists". As operations you have the "empty list", which is a 0-argument function returning a "list", and a "cons" function which takes two arguments, a "list element" and a "list", and produces a "list".
At this point there are many algebras that fit the description, as two undesirable things may happen:
There could be elements in the "list" set which cannot be built from the "empty list" and the "cons" operation, so-called "junk". This could be lists starting from some element that fell from the sky, or loops without a beginning, or infinite lists.
The results of "cons" applied to different arguments could be equal, e.g. consing an element to a non-empty list could be equal to the empty list. This is sometimes called "confusion".
An algebra which has neither of these undesirable properties is called initial, and this is the intended meaning of the algebraic data type.
The name initial derives from the property that there is exactly one homomorphism from the initial algebra to any given algebra. Essentially you can evaluate the value of a list by applying the operations in the other algebra, and the result is well-defined.
It gets more complicated for polymorphic types ...
A simple reason why they are called algebraic: there are both sum (logical disjunction) and product (logical conjunction) types. A sum type is a discriminated union, e.g.:
data Bool = False | True
A product type is a type with multiple parameters:
data Pair a b = Pair a b
In O'Caml "product" is made more explicit:
type ('a, 'b) pair = Pair of 'a * 'b
Haskell's datatypes are called "algebraic" because of their connection to categorical initial algebras. But that way lies madness.
#olliej: ADTs are actually "sum" types. Tuples are products.
#Timbo:
You are basically right about it being sort of like an abstract Tree class with three derived classes (Empty, Leaf, and Node), but you would also need to enforce the guarantee that someone using your Tree class can never add any new derived classes, since the strategy for using the Tree data type is to write code that switches at runtime based on the type of each element in the tree (and adding new derived types would break existing code). You can sort of imagine this getting nasty in C# or C++, but in Haskell, ML, and OCaml, this is central to the language design and syntax, so coding style supports it in a much more convenient manner, via pattern matching.
ADTs (sum types) are also sort of like tagged unions or variant types in C or C++.
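For instance, the Tree example below, written as an OCaml variant, shows how pattern matching replaces that runtime type switch (my own sketch, not from the original answers):

(* the match must cover exactly the three declared constructors,
   and no new "derived class" can be added later *)
type tree =
  | Empty
  | Leaf of int
  | Node of tree * tree

let rec sum (t : tree) : int =
  match t with
  | Empty -> 0
  | Leaf n -> n
  | Node (l, r) -> sum l + sum r

let () = assert (sum (Node (Leaf 1, Node (Leaf 2, Empty))) = 3)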
Old question, but no one's mentioned nullability, which is an important aspect of algebraic data types, perhaps the most important aspect. Since each value must be one of the alternatives, exhaustive case-based pattern matching is possible.
For me, the concept of Haskell's algebraic data types always looked like polymorphism in OO-languages like C#.
Look at the example from http://en.wikipedia.org/wiki/Algebraic_data_types:
data Tree = Empty
          | Leaf Int
          | Node Tree Tree
This could be implemented in C# as a TreeNode base class, with a derived Leaf class and a derived TreeNodeWithChildren class, and if you want even a derived EmptyNode class.
(OK I know, nobody would ever do that, but at least you could do it.)
