ANTLR3 Kleene star and tree traversal

I'm an experienced yacc/bison abuser. I'm used to building my own trees and then traversing them. So, now, switching to ANTLR3 (why 3? Because 4 doesn't support Python, that's why!)... I have the following syntax:
symbol : ID fields -> ^(NAME ID fields);
fields : (DOT ID)* -> ^(FIELD ID*);
And my tree grammar fragment is:
names: ^(NAME id=. fields) ;
fields: ^(FIELD .*) ;
The resulting tree for "let z = a.b.c" is:
(LETS (LET z (NAME a (FIELD b c))))
And the walker says:
node from line 1:10 mismatched input u'b' expecting
Various attempts to use + instead of * have failed in other ways. Maybe there's documentation on exactly how the tree walker handles * and +, but I didn't find it (like some people and refrigerators?).

Related

How to implement a union-find (disjoint set) data structure in Coq?

I am quite new to Coq, but for my project I have to use a union-find data structure in Coq. Are there any implementations of the union-find (disjoint set) data structure in Coq?
If not, can someone provide an implementation or some ideas? It doesn't have to be very efficient. (no need to do path compression or all the fancy optimizations) I just need a data structure that can hold an arbitrary data type (or nat if it's too hard) and perform: union and find.
Thanks in advance
If all you need is a mathematical model, with no concern for actual performance, I would go for the most straightforward one: a functional map (finite partial function) in which each element optionally links to another element with which it has been merged.
If an element links to nothing, then its canonical representative is itself.
If an element links to another element, then its canonical representative is the canonical representative of that other element.
Note: in the remainder of this answer, as is standard with union-find, I will assume that elements are simply natural numbers. If you want another type of elements, simply have another map that binds all elements to unique numbers.
Then you would define a function find : UnionFind → nat → nat that returns the canonical representative of a given element, by following links as long as you can. Notice that the function would use recursion, whose termination argument is not trivial. To make it happen, I think that the easiest way is to maintain the invariant that a number only links to a lesser number (i.e. if i links to j, then i > j). Then the recursion terminates because, when following links, the current element is a decreasing natural number.
Defining the function union : UnionFind → nat → nat → UnionFind is easier: union m i j simply returns an updated map with max i' j' linking to min i' j', where i' = find m i and j' = find m j.
[Side note on performance: maintaining the invariant means that you cannot adequately choose which of a pair of partitions to merge into the other, based on their ranks; however you can still implement path compression if you want!]
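To make the shape concrete, here is a minimal OCaml sketch of this design (the question asks for Coq, but the structure translates directly; the representation details and names are my own):

module IntMap = Map.Make (Int)

(* Each element optionally links to a strictly smaller element;
   an absent key means the element links to nothing. *)
type union_find = int IntMap.t

(* Follow links until an element that links to nothing is reached.
   Termination: every link goes to a strictly smaller number. *)
let rec find (m : union_find) (i : int) : int =
  match IntMap.find_opt i m with
  | Some j -> find m j
  | None -> i

(* Merge the classes of i and j: link the larger root to the smaller,
   preserving the invariant. *)
let union (m : union_find) (i : int) (j : int) : union_find =
  let i', j' = find m i, find m j in
  if i' = j' then m
  else IntMap.add (max i' j') (min i' j') m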
As for which data structure exactly to use for the map: there are several available.
The standard library (look under the title FSets) has several implementations (FMapList, FMapPositive and so on) satisfying the interface FMapInterface.
The stdpp library has gmap.
Again if performance is not a concern, just pick the simplest encoding or, more importantly, the one that makes your proofs the simplest. I am thinking of just a list of natural numbers.
The positions of the list are the elements in reverse order.
The values of the list are offsets, i.e. the number of positions to skip forward in order to reach the target of the link.
For an element i linking to j (i > j), the offset is i − j.
For a canonical representative, the offset is zero.
With my best pseudo-ASCII-art skills, here is a map where the links are { 6↦2, 4↦2, 3↦0, 2↦1 } and the canonical representatives are { 5, 1, 0 }:
element:   6   5   4   3   2   1   0
map:     [ 4 ; 0 ; 2 ; 3 ; 1 ; 0 ; 0 ]
Reading the offsets: position 6 holds 4, so 6 links to 6 − 4 = 2; position 4 holds 2, so 4 links to 2; position 3 holds 3, so 3 links to 0; position 2 holds 1, so 2 links to 1; positions 5, 1 and 0 hold 0 and are canonical.
The motivation is that the invariant discussed above is then enforced structurally. Hence, there is hope that find could actually be defined by structural induction (on the structure of the list), and have termination for free.
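Here is how find could look on this offset-list encoding, again sketched in OCaml rather than Coq (skip, go and the bounds handling are my own choices):

(* The map is a list whose head is element n-1 and whose last cell is
   element 0; each cell holds the offset to the element it links to. *)
let rec skip k l = if k = 0 then l else skip (k - 1) (List.tl l)

let find (n : int) (map : int list) (i : int) : int =
  (* go conceptually recurses on the structure of the list: following a
     non-zero offset always moves to a strictly shorter suffix. *)
  let rec go i l =
    match l with
    | [] -> invalid_arg "find: element out of range"
    | 0 :: _ -> i                           (* canonical representative *)
    | off :: _ -> go (i - off) (skip off l) (* i links to i - off *)
  in
  go i (skip (n - 1 - i) map)

On the example map above, find 7 [4; 0; 2; 3; 1; 0; 0] 6 follows the links 6 → 2 → 1 and returns 1.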
A related paper is: Sylvain Conchon and Jean-Christophe Filliâtre. A Persistent Union-Find Data Structure. In ACM SIGPLAN Workshop on ML.
It describes the implementation of an efficient union-find data structure in ML, that is persistent from the user perspective, but uses mutation internally. What may be more interesting for you, is that they prove it correct in Coq, which implies that they have a Coq model for union-find. However, this model reflects the memory store for the imperative program that they seek to prove correct. I’m not sure how applicable it is to your problem.
Maëlan has a good answer, but for an even simpler and more inefficient disjoint-set data structure, you can just use functions to nat to represent them. This avoids any termination stickiness. In essence, the preimages of any total function form disjoint sets over the domain. Another way of looking at this is to represent any disjoint set G as the curried application find_root G : nat -> nat, since find_root is the essential interface that disjoint sets provide.
This is also analogous to using functions to represent Maps in Coq like in Software Foundations. https://softwarefoundations.cis.upenn.edu/lf-current/Maps.html
Require Import Arith.
Search eq_nat_decide.

(* A disjoint set: a function mapping each element to its canonical
   representative. *)
Definition ds := nat -> nat.

(* Initially, every element is its own representative. *)
Definition init_ds : ds := fun x => x.

Definition find_root (g : ds) x := g x.

(* eq_nat_decide gives a decidable equality, usable directly in an if. *)
Definition in_same_set (g : ds) x y :=
  eq_nat_decide (g x) (g y).

(* Redirect everything in x's class to y's representative. *)
Definition union (g : ds) x y : ds :=
  fun z =>
    if in_same_set g x z
    then find_root g y
    else find_root g z.
You can also make it generic over the type held in the disjoint set, like so:
Definition ds (a : Type) := a -> nat.
Definition find_root {a} (g : ds a) x := g x.
Definition in_same_set {a} (g : ds a) x y :=
  eq_nat_decide (g x) (g y).
Definition union {a} (g : ds a) x y : ds a :=
  fun z =>
    if in_same_set g x z
    then find_root g y
    else find_root g z.
To initialize the disjoint set for a particular a, you basically need an Enum-like instance for your type a, i.e., an injection from a into nat.
Definition init_bool_ds : ds bool := fun x => if x then 0 else 1.
You may want to trade out eq_nat_decide for eqb or some other roughly equivalent thing depending on your proof style and needs.

SML: Counting nodes

My assignment is to write a function that will compute the size of a binary tree. This is the implementation of the tree structure:
datatype 'a bin_tree =
    Leaf of 'a
  | Node of 'a bin_tree   (* left tree *)
          * int           (* size of left tree *)
          * int           (* size of right tree *)
          * 'a bin_tree   (* right tree *)
I was given this template from my professor:
fun getSize Empty = 0
  | getSize (Leaf _) = 1
  | getSize (Node (t1, _, t2)) = getSize t1 + getSize t2;
I was wondering if I need to manipulate this to agree with my tree structure in order to get it to work?
The 'a bin_tree type memoizes the size of each sub-tree. So if you're allowed to assume that the stored sizes are correct, you can return the size of a tree without recursion.
The template given by your professor is not for this type, but for another tree type that does not memoize the size. It demonstrates how you can calculate the size of such a tree by pattern matching and recursion, both of which are language features you will also need to use here.
So the task is for you to write an entirely different function for the 'a bin_tree type, and to figure out the right way to pattern match. First off, the template for getSize does not add up: it has three cases with three constructors, Empty, Leaf x and Node (L, x, R), but the 'a bin_tree type only has two constructors, Leaf x and Node (L, sizeL, sizeR, R).
So you want to read up on how to perform pattern matching on data types.
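For reference, here is the shape such a function could take, sketched in OCaml rather than SML (a close relative); that leaves count 1 and a node's size is the sum of its subtree sizes follows the professor's template:

type 'a bin_tree =
  | Leaf of 'a
  | Node of 'a bin_tree * int * int * 'a bin_tree

(* No recursion needed: each Node already caches both subtree sizes. *)
let get_size = function
  | Leaf _ -> 1
  | Node (_, size_l, size_r, _) -> size_l + size_r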

Magic code for level binary tree traversal - what is going on?

We have a definition of binary tree:
type 'a tree =
  | Node of 'a tree * 'a * 'a tree
  | Null;;
And also a helpful function for traversing the tree:
let rec fold_tree f a t =
  match t with
  | Null -> a
  | Node (l, x, r) -> f x (fold_tree f a l) (fold_tree f a r);;
And here is a "magic" function which, given a binary tree, returns a list of lists of the elements on each level. For example, given the tree below:
(image omitted: a binary tree with 1 at the root, 2 and 3 on the second level, 4 through 7 on the third, and 8 and 9 on the fourth; source: ernet.in)
the function returns [[1];[2;3];[4;5;6;7];[8;9]].
let levels tree =
  let aux x fl fp = fun l ->
    match l with
    | [] -> [x] :: (fl (fp []))
    | h :: t -> (x :: h) :: (fl (fp t))
  in
  fold_tree aux (fun x -> x) tree [];;
And apparently it works, but I can't wrap my mind around it. Could anyone explain in simple terms what is going on? Why does this function work?
How do you combine the layer lists of two subtrees to get the layer list of a bigger tree? Suppose you have this tree:
  a
 / \
x   y
where x and y are arbitrary trees, and they have their layer lists as [[x00,x01,...],[x10,x11,...],...] and [[y00,y01,...],[y10,y11,...],...] respectively.
The layer list of the new tree will be [[a],[x00,x01,...]++[y00,y01,...],[x10,x11,...]++[y10,y11,...],...]. How does this function build it?
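Before tracing how the folded functions achieve this, here is the combination step written naively in OCaml (my own sketch; the levels function above achieves the same effect more cleverly, as explained next):

(* Append two layer lists level by level, keeping the leftover
   levels of the deeper subtree. *)
let rec merge_layers ls rs =
  match ls, rs with
  | [], rest | rest, [] -> rest
  | l :: ls', r :: rs' -> (l @ r) :: merge_layers ls' rs'

(* Layers of a tree with root a: the root alone on top, then the
   merged layers of the two subtrees. *)
let combine a left_layers right_layers =
  [a] :: merge_layers left_layers right_layers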
Let's look at this definition
let rec fold_tree f a t = ...
and see what kind of arguments we are passing to fold_tree in our definition of levels.
... in fold_tree aux (fun x -> x) tree []
So the first argument, aux, is some kind of long and complicated function. We will return to it later.
The second argument is also a function: the identity function. This means that fold_tree will also return a function, because fold_tree always returns the same type of value as its second argument. We will argue that fold_tree, applied to this set of arguments, yields a function that takes a list of layers and adds the layers of the given tree to it.
The third argument is our tree.
Wait, what's the fourth argument? Isn't fold_tree only supposed to take three arguments? Yes, but since it returns a function (see above), that function gets applied to the fourth argument, the empty list.
So let's return to aux. This aux function accepts three arguments. One is the element of the tree, and the other two are the results of folding the subtrees, that is, whatever fold_tree returns. In our case, these two things are functions again.
So aux gets a tree element and two functions, and returns yet another function. Which function is that? It takes a list of layers and adds the layers of the given tree to it. How does it do that? It prepends the root of the tree to the first element of the list (the top layer), then adds the layers of the right subtree to the tail of the list (all the layers below the top) by calling the right function on it, and then adds the layers of the left subtree to the result by calling the left function on it. Or, if the incoming list is empty, it builds the layers list afresh by applying the above steps to the empty list.
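A quick sanity check, using the definitions above on a tree I made up for the purpose:

let t = Node (Node (Null, 2, Null), 1, Node (Null, 3, Null))
let () = assert (levels t = [[1]; [2; 3]])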

Syntax Directed Definition for a grammar to print the parsing string

Consider the grammar below:
S->SaS|bB
B->AcB| ε
A->dAd| ε
For the grammar given above, write the syntax directed definition that
prints the string that is being parsed and construct an annotated parse tree for the string ‘bddcab’.
Solution:
Now rewriting above grammar we have:
S->S1aS2
S->bB
B->AcB1
B-> ε
A->dA1d
A-> ε
(The numbers 1 and 2 following a non-terminal denote subscripts, and the subscripts distinguish instances of that non-terminal in the grammar above.)
The above grammar along with the semantic rules.
Production      Semantic Rules
S -> S1 a S2    S.val = S1.val + a.lexval + S2.val    { print S.val }
S -> b B        S.val = b.lexval + B.val              { print S.val }
B -> A c B1     B.val = A.val + c.lexval + B1.val
B -> ε
A -> d A1 d     A.val = d.lexval + A1.val + d.lexval
A -> ε
(The '+' operator here merely denotes string concatenation.)
Is this solution alright? I have a feeling that it might not be accurate.
Here's the annotated parse tree (image omitted).
I think that those print actions in the S rules will backfire because S can occur multiple times.
S can generate SaS. But each of those S's can also generate SaS.
Basically, if you're building the printed representation as a semantic property, you can only print it outside of the grammar, once it is fully evaluated; that ensures the print happens only once.
This could be shown by introducing a pseudo start symbol X. S is reduced to X just once, and so the print happens just once, pulling the final val from the top-level S.
X -> S { print S.val } // print the top-level S's val, just once.
The other approach would be to have truly syntax-directed printing, whereby the side effect of printing happens as the parsing reductions take place, e.g. with a Yacc-like embedded action among the right-hand symbols:
S -> S1 a { print a.lexeme } S2 { /* other semantic rules go here */ }
In every rule that recognizes a terminal, print the terminal as soon as it is recognized. So here, we know that the reduction of S1 causes all of its terminals to be printed (by similar rules all over the grammar). Then we recognize an a and print it, and then S2 is recognized and reduced, causing all of its terminals to be printed. You may recognize that this is closely analogous to an inorder traversal of a tree.
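The analogy can be made concrete with a small OCaml sketch (the tree type and names are mine, purely for illustration):

type parse_tree =
  | Leaf of string                              (* a terminal *)
  | Branch of parse_tree * string * parse_tree  (* S1, then a, then S2 *)

(* Print each terminal the moment it is "recognized": an inorder walk. *)
let rec print_inorder = function
  | Leaf s -> print_string s
  | Branch (s1, a, s2) ->
      print_inorder s1;   (* everything derived from S1 *)
      print_string a;     (* the terminal between them *)
      print_inorder s2    (* everything derived from S2 *)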

Haskell's algebraic data types

I'm trying to fully understand all of Haskell's concepts.
In what ways are algebraic data types similar to generic types, e.g., in C# and Java? And how are they different? What's so algebraic about them anyway?
I'm familiar with universal algebra and its rings and fields, but I only have a vague idea of how Haskell's types work.
Haskell's algebraic data types are named such since they correspond to an initial algebra in category theory, giving us some laws, some operations and some symbols to manipulate. We may even use algebraic notation for describing regular data structures, where:
+ represents sum types (disjoint unions, e.g. Either).
• represents product types (e.g. structs or tuples)
X for the singleton type (e.g. data X a = X a)
1 for the unit type ()
and μ for the least fixed point (e.g. recursive types), usually implicit.
with some additional notation:
X² for X•X
In fact, you might say (following Brent Yorgey) that a Haskell data type is regular if it can be expressed in terms of 1, X, +, •, and a least fixed point.
With this notation, we can concisely describe many regular data structures:
Units: data () = ()
1
Options: data Maybe a = Nothing | Just a
1 + X
Lists: data [a] = [] | a : [a]
L = 1 + X•L
Binary trees: data BTree a = Empty | Node a (BTree a) (BTree a)
B = 1 + X•B²
Other operations hold (taken from Brent Yorgey's paper, listed in the references):
Expansion: unfolding the fixed point can be helpful for thinking about lists. L = 1 + X + X² + X³ + ... (that is, lists are either empty, or they have one element, or two elements, or three, or ...); a worked derivation follows after this list.
Composition, ◦: given types F and G, the composition F ◦ G is a type which builds “F-structures made out of G-structures” (e.g. R = X • (L ◦ R), where L is lists, is a rose tree).
Differentiation: the derivative of a data type D (written D′) is the type of D-structures with a single “hole”, that is, a distinguished location not containing any data. Amazingly, this satisfies the same rules as differentiation in calculus (see the sketch after these rules):
1′ = 0
X′ = 1
(F + G)′ = F′ + G′
(F • G)′ = F • G′ + F′ • G
(F ◦ G)′ = (F′ ◦ G) • G′
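Two quick illustrations of these operations (my own, in the notation above). For expansion, treating the fixed point as an ordinary equation and solving formally (a suggestive manipulation, not a rigorous proof):

L = 1 + X•L  ⟹  L•(1 − X) = 1  ⟹  L = 1/(1 − X) = 1 + X + X² + X³ + ...

For differentiation, applying the rules to L = 1/(1 − X) gives L′ = 1/(1 − X)² = L•L: a one-hole context in a list is a pair of lists, the well-known list zipper. A minimal OCaml sketch (names mine):

(* The derivative of the list type: the elements before the hole
   and the elements after it. *)
type 'a list_zipper = { before : 'a list; after : 'a list }

(* Plugging a value into the hole rebuilds the original list;
   before is stored reversed so plugging is a single pass. *)
let plug (z : 'a list_zipper) (x : 'a) : 'a list =
  List.rev_append z.before (x :: z.after)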
References:
Species and Functors and Types, Oh My!, Brent A. Yorgey, Haskell’10, September 30, 2010, Baltimore, Maryland, USA
Clowns to the left of me, jokers to the right (Dissecting Data Structures), Conor McBride POPL 2008
"Algebraic Data Types" in Haskell support full parametric polymorphism, which is the more technically correct name for generics, as a simple example the list data type:
data List a = Cons a (List a) | Nil
is equivalent (as much as is possible, and ignoring non-strict evaluation, etc.) to:
class List<a> {
    class Cons : List<a> {
        a head;
        List<a> tail;
    }
    class Nil : List<a> {}
}
Of course Haskell's type system allows more ... interesting uses of type parameters, but this is just a simple example. As regards the "algebraic type" name, I've honestly never been entirely sure of the exact reason for it, but have assumed that it's due to the mathematical underpinnings of the type system. I believe the reason boils down to the theoretical definition of an ADT being the "product of a set of constructors"; however, it's been a couple of years since I escaped university, so I can no longer remember the specifics.
[Edit: Thanks to Chris Conway for pointing out my foolish error, ADT are of course sum types, the constructors providing the product/tuple of fields]
In universal algebra, an algebra consists of some sets of elements (think of each set as the set of values of a type) and some operations, which map elements to elements.
For example, suppose you have a type of "list elements" and a type of "lists". As operations you have the "empty list", which is a 0-argument function returning a "list", and a "cons" function which takes two arguments, a "list element" and a "list", and produces a "list".
At this point there are many algebras that fit the description, as two undesirable things may happen:
There could be elements in the "list" set which cannot be built from the "empty list" and the "cons" operation, so-called "junk". This could be lists starting from some element that fell from the sky, or loops without a beginning, or infinite lists.
The results of "cons" applied to different arguments could be equal, e.g. consing an element onto a non-empty list could be equal to the empty list. This is sometimes called "confusion".
An algebra which has neither of these undesirable properties is called initial, and this is the intended meaning of the algebraic data type. The name "initial" derives from the property that there is exactly one homomorphism from the initial algebra to any given algebra. Essentially you can evaluate the value of a list by applying the operations in the other algebra, and the result is well-defined (a small sketch of this follows).
It gets more complicated for polymorphic types ...
A simple reason why they are called algebraic: there are both sum (logical disjunction) and product (logical conjunction) types. A sum type is a discriminated union, e.g.:
data Bool = False | True
A product type is a type with multiple parameters:
data Pair a b = Pair a b
In O'Caml "product" is made more explicit:
type ('a, 'b) pair = Pair of 'a * 'b
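One way to see the algebra concretely is to count values: writing |T| for the number of values of type T, a sum type satisfies |Either a b| = |a| + |b| and a product type satisfies |(a, b)| = |a| • |b|. For example, Bool has 2 values, so Either Bool Bool has 2 + 2 = 4 values and (Bool, Bool) has 2 • 2 = 4.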
Haskell's datatypes are called "algebraic" because of their connection to categorical initial algebras. But that way lies madness.
#olliej: ADTs are actually "sum" types. Tuples are products.
#Timbo:
You are basically right that it is sort of like an abstract Tree class with three derived classes (Empty, Leaf, and Node). But you would also need to enforce the guarantee that no one using your Tree class can ever add new derived classes, since the strategy for using the Tree data type is to write code that switches at runtime based on the type of each element in the tree (and adding new derived types would break existing code). You can imagine this getting nasty in C# or C++, but in Haskell, ML, and OCaml this is central to the language design and syntax, so coding style supports it in a much more convenient manner, via pattern matching.
ADT (sum types) are also sort of like tagged unions or variant types in C or C++.
Old question, but no one's mentioned nullability, which is an important aspect of algebraic data types, perhaps the most important one. Since each value must be exactly one of the alternatives, exhaustive case-based pattern matching is possible.
For me, the concept of Haskell's algebraic data types always looked like polymorphism in OO-languages like C#.
Look at the example from http://en.wikipedia.org/wiki/Algebraic_data_types:
data Tree = Empty
          | Leaf Int
          | Node Tree Tree
This could be implemented in C# as a TreeNode base class, with a derived Leaf class and a derived TreeNodeWithChildren class, and, if you want, even a derived EmptyNode class.
(OK I know, nobody would ever do that, but at least you could do it.)
