How do you write the typing derivation for this using rules from the simply typed lambda calculus?

I have to give the typing derivation for this particular statement:
⊢ λf : unit → (unit × unit). fst (f ()) : (unit → (unit × unit)) → unit
I am super new to this and don't understand why it just leads to unit at the end.
Shouldn't it be
unit → unit × unit → unit → unit × unit?
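For what it's worth, here is a sketch of the derivation under the standard STLC rules for unit and products, writing Γ for the context f : unit → (unit × unit). The key point is that the abstraction rule combines the bound variable's type with the type of the body, and the body fst (f ()) has type unit:

\dfrac{
  \dfrac{
    \dfrac{
      \dfrac{(f : \mathtt{unit} \to (\mathtt{unit} \times \mathtt{unit})) \in \Gamma}
            {\Gamma \vdash f : \mathtt{unit} \to (\mathtt{unit} \times \mathtt{unit})}\;(\textsc{Var})
      \qquad
      \dfrac{}{\Gamma \vdash () : \mathtt{unit}}\;(\textsc{Unit})
    }{\Gamma \vdash f\;() : \mathtt{unit} \times \mathtt{unit}}\;(\textsc{App})
  }{\Gamma \vdash \mathsf{fst}\;(f\;()) : \mathtt{unit}}\;(\textsc{Proj}_1)
}{\vdash \lambda f : \mathtt{unit} \to (\mathtt{unit} \times \mathtt{unit}).\;\mathsf{fst}\;(f\;()) : (\mathtt{unit} \to (\mathtt{unit} \times \mathtt{unit})) \to \mathtt{unit}}\;(\textsc{Abs})

So the type of the whole lambda is (argument type) → (type of the body), not a chain of every type that appears in the term; the argument type (unit → (unit × unit)) shows up once, on the left of the outer arrow, and the result is just unit.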


Why does QuickCheck take a long time when testing a Functor instance with a specific type signature?

I'm working through the wonderful Haskell Book. While solving some exercises I ran a QuickCheck test that took a relatively long time to run, and I can't figure out why.
The exercise I am solving is in Chapter 16 - I need to write a Functor instance for
data Parappa f g a =
  DaWrappa (f a) (g a)
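For context, the instance being tested is presumably the obvious pointwise one (a sketch; the linked solution itself is not reproduced here):

instance (Functor f, Functor g) => Functor (Parappa f g) where
  -- Map over both wrapped functorial values.
  fmap h (DaWrappa fa ga) = DaWrappa (fmap h fa) (fmap h ga)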
Here is a link to the full code of my solution. The part I think is relevant is this:
functorCompose' :: (Eq (f c), Functor f)
                => Fun a b -> Fun b c -> f a -> Bool
functorCompose' fab gbc x =
  (fmap g (fmap f x)) == (fmap (g . f) x)
  where f = applyFun fab
        g = applyFun gbc

type ParappaComp =
     Fun Integer String
  -> Fun String [Bool]
  -> Parappa [] Maybe Integer
  -- -> Parappa (Either Char) Maybe Integer
  -> Bool

main :: IO ()
main = do
  quickCheck (functorCompose' :: ParappaComp)
When I run this in the REPL it takes ~6 seconds to complete. If I change ParappaComp to use Either Char instead of [] (see comment in code), it finishes instantaneously like I'm used to seeing in all other exercises.
I suspect that maybe QuickCheck is using very long lists causing the test to take a long time, but I am not familiar enough with the environment to debug this or to test this hypothesis.
Why does this take so long?
How should I go about debugging this?
I'm not sure of the actual cause either, but one way to start debugging this is to use the collect function from QuickCheck to collect statistics about test cases. To start, you can collect the size of the result.
A simple way to obtain a size is the length function, which requires the functor f to be Foldable.
You will need to implement or derive Foldable for Parappa (add {-# LANGUAGE DeriveFoldable #-} at the top of the file and deriving Foldable to Parappa).
To use collect, you also need to generalize Bool to Property (in the signature of functorCompose' and in the type synonym ParappaComp):
functorCompose' :: (Eq (f c), Functor f, Foldable f)
                => Fun a b -> Fun b c -> f a -> Property
functorCompose' fab gbc x =
  collect (length x) $
    (fmap g (fmap f x)) == (fmap (g . f) x)
  where f = applyFun fab
        g = applyFun gbc
With that you can see that the distribution of the lengths of generated lists is clustered around 20, with a long tail up to 100. That alone doesn't seem to explain the slowness, as one would expect that traversing lists of that size should be virtually instantaneous.
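If you want to test the long-list hypothesis directly, another option is to cap the size parameter that QuickCheck hands to generators, using quickCheckWith and stdArgs from the QuickCheck API (a sketch, reusing the ParappaComp synonym from the question):

main :: IO ()
main = do
  -- maxSize bounds the size parameter passed to arbitrary; if the
  -- test is suddenly fast, the size of the generated structures is
  -- at least part of the slowdown.
  quickCheckWith stdArgs { maxSize = 10 } (functorCompose' :: ParappaComp)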

What are the rules for custom syntax declarations in Agda?

The Agda docs don't really have much to say on syntax declarations, and a cursory glance at the source code is less than illuminating, so I've been trying to piece it together myself using examples from the standard library like Σ[_]_ and ∃[_]_. I can reproduce an (admittedly rather contrived) example like theirs fairly easily
twice : {A : Set} → (A → A) → A → A
twice f x = f (f x)
syntax twice (λ x → body) = twice[ x ] body
But when I try to define custom syntax that binds two variables, I get an error
swap : {A B C : Set} → (A → B → C) → B → A → C
swap f y x = f x y
syntax swap (λ x y → body) = swap[ x , y ] body
Specifically,
Parse error
y<ERROR>
→ body) = swap[ x , y ] body
...
So I assume there are some rules as to what's allowed on the left-hand side of a syntax declaration. What are these rules, and what of them prohibits my two-variable lambda form above?
Currently Agda does not allow syntax declarations with multi-argument lambda abstractions. This is a known limitation, see the issue tracker for the corresponding enhancement request.
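If the telescoped binding sugar is not essential, one workaround is to avoid the syntax declaration entirely and make the brackets part of an ordinary mixfix name, passing the two-argument lambda explicitly. A sketch (hypothetical names; Data.Bool is imported only for the usage example):

open import Data.Bool using (Bool; true; false; _∧_)

swap[_] : {A B C : Set} → (A → B → C) → B → A → C
swap[ f ] y x = f x y

-- Usage: the lambda is written out instead of being bound by the syntax.
test : Bool
test = swap[ (λ x y → x ∧ y) ] true false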

Type inference fails horribly when omitting argument label on a function call

Given the following function
let get_or ~default = function
  | Some a -> a
  | None -> default
If this function is called with the argument labeled, it works as expected:
let works = get_or ~default:4 (Some 2)
But if the label is omitted it somehow fails:
let fails = get_or 4 (Some 2)
It gets weirder, however, the error message given here by the compiler is:
This expression has type int but an expression was expected of type ('a -> 'b) option
Not only has the compiler incorrectly inferred it to be an option, but for some reason it also pulls a function type out of the proverbial magician's hat! So naturally I wonder: where on earth does that come from? And somewhat less immediately important to my curiosity, why doesn't omitting the label work in this specific case?
See this Reason playground for an interactive example.
Credit for this puzzle goes to #nachotoday on the Reason Discord.
This is a case of destructive interference between labelled arguments, currying, and first-class functions. The function get_or has the type
val get_or: default:'a -> 'a option -> 'a
The rule with labelled arguments is that labels can be omitted when the application is total. At first glance, this means that if one applies get_or to two arguments, it is a total application. But the fact that the return type of get_or is polymorphic (namely 'a) spells trouble. Consider for instance:
let id x = x
let x : _ -> _ = get_or (Some id) id ~default:id
This is valid code, where get_or was applied to three arguments and where the default argument was provided at the third position!
Going even further, as surprising as it might seem, this is still valid:
let y : default:_ -> _ -> _ = get_or (Some id) id id
And it yields a quite complicated type for y.
The generic rule here is that if the return type of a function is polymorphic, then the type-checker can never know whether a function application is total; thus labels can never be omitted.
Going back to your example, this means that the type-checker reads
get_or 4 (Some 2)
as follows:
- First, get_or has the type default:'a -> 'a option -> 'a.
- The default label has not been provided yet, so the result will have the type default:'r -> 'r for some 'r.
- Looking at 'r: in get_or x y (with x = 4 and y = Some 2), get_or is used with the type 'a option -> 'a, thus get_or x : 'a with x : 'a option.
- Then get_or x : 'a is applied to y; thus 'a = 'b -> 'c.
- In other words, I should have x : ('b -> 'c) option, but I know that x : int.
Which leads to the contradiction reported by the type-checker: 4 was expected to be an option of a function, ('a -> 'b) option, but was obviously an int.
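As a sanity check on the rule, a small sketch (get_or_int is a hypothetical monomorphic variant): when the return type is concrete, the type-checker can tell that an application is total, and the labels may then be omitted:

(* Same function, but with a monomorphic return type. *)
let get_or_int ~default (o : int option) : int =
  match o with
  | Some a -> a
  | None -> default

(* int is not an arrow type, so this application is known to be
   total and the unlabelled 4 is matched to ~default. *)
let ok = get_or_int 4 (Some 2) (* = 2 *)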

Understanding the syntax of Agda

Using the following as an example
postulate DNE : {A : Set} → ¬ (¬ A) → A

data ∨ (A B : Set) : Set where
  inl : A → A ∨ B
  inr : B → A ∨ B

-- Use double negation to prove excluded middle
classical-2 : {A : Set} → A ∨ ¬ A
classical-2 = λ {A} → DNE (λ z → z (inr (λ x → z (inl x))))
I know this is correct, purely because of how Agda works, but I am new to this language and can't get my head around how its syntax works. I would appreciate it if anyone could walk me through what is going on, thanks :)
I have experience in Haskell, although that was around a year ago.
Let's start with the postulate. The syntax is simply:
postulate name : type
This asserts that there exists some value of type type called name. Think of it as axioms in logic - things that are defined to be true and are not to be questioned (by Agda, in this case).
Next up is the data definition. There's a slight oversight with the mixfix declaration so I'll fix it and explain what it does. The first line:
data _∨_ (A B : Set) : Set where
Introduces a new type (constructor) called _∨_. _∨_ takes two arguments of type Set and then returns a Set.
I'll compare it with Haskell. The A and B are more or less equivalent to a and b in the following example:
data Or a b = Inl a | Inr b
This means that the data definition defines a polymorphic type (a template or a generic, if you will). Set is the Agda equivalent of Haskell's *.
What's up with the underscores? Agda allows you to define arbitrary operators (prefix, postfix, infix... usually just called by a single name - mixfix). The underscores just tell Agda where the arguments are. This is best seen with prefix/postfix operators:
-_ : Integer → Integer -- unary minus
- n = 0 - n
_++ : Integer → Integer -- postfix increment
x ++ = x + 1
You can even create crazy operators such as:
if_then_else_ : ...
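To make that concrete, here is the usual definition of if_then_else_ (a sketch, assuming Bool, true and false from Data.Bool):

open import Data.Bool using (Bool; true; false)

if_then_else_ : {A : Set} → Bool → A → A → A
if true  then t else e = t
if false then t else e = e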
The next part is the definition of the data constructors themselves. If you've seen Haskell's GADTs, this is more or less the same thing. If you haven't:
When you define a constructor in Haskell, say Inr above, you just specify the type of the arguments and Haskell figures out the type of the whole thing, that is Inr :: b -> Or a b. When you write GADTs or define data types in Agda, you need to specify the whole type (and there are good reasons for this, but I won't get into that now).
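For comparison, here is what the Or type above looks like as a Haskell GADT, where each constructor's full type is spelled out just as in the Agda data definition (a sketch assuming the GADTs extension):

{-# LANGUAGE GADTs #-}

-- Each constructor now states its complete type, Agda-style.
data Or a b where
  Inl :: a -> Or a b
  Inr :: b -> Or a b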
So, the data definition specifies two constructors, inl of type A → A ∨ B and inr of type B → A ∨ B.
Now comes the fun part: the first line of classical-2 is a simple type declaration. What's up with the Set thing? When you write polymorphic functions in Haskell, you just use lowercase letters to represent type variables, say:
id :: a -> a
What you really mean is:
id :: forall a. a -> a
And what you really mean is:
id :: forall (a :: *). a -> a
I.e. it's not just any kind of a, but that a is a type. Agda makes you do this extra step and declare this quantification explicitly (that's because you can quantify over more things than just types).
And the curly braces? Let me use the Haskell example above again. When you use the id function somewhere, say id 5, you don't need to specify that a = Integer.
If you used normal parentheses, you'd have to provide the actual type A every time you called classical-2. However, most of the time the type can be deduced from the context (much like the id 5 example above), so for those cases you can "hide" the argument. Agda then tries to fill it in automatically - and if it cannot, it complains.
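A small sketch of hidden arguments in action (assuming Bool and true from Data.Bool): the hidden A of id is inferred at the first call site and supplied explicitly at the second:

open import Data.Bool using (Bool; true)

id : {A : Set} → A → A
id x = x

inferred : Bool
inferred = id true         -- A is deduced from the argument

explicit : Bool
explicit = id {Bool} true  -- A is passed by hand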
And for the last line: λ x → y is the Agda way of saying \x -> y. That should explain most of the line, the only thing that remains are the curly braces yet again. I'm fairly sure that you can omit them here, but anyways: hidden arguments do what they say - they hide. So when you define a function from {A} to B, you just provide something of type B (because {A} is hidden). In some cases, you need to know the value of the hidden argument and that's what this special kind of lambda does: λ {A} → allows you to access the hidden A!

Does Agda treat records and datatypes differently for the purposes of termination-checking?

Here is an example of some Agda (2.4.2) code defining games and a binary operation on games.
module MWE where

open import Data.Sum
open import Size

data Game (i : Size) : Set₁ where
  game : {Move : Set} → {<i : Size< i} → (play : Move → Game <i) → Game i

_∧_ : ∀ {i j} → Game i → Game j → Game ∞
_∧_ {i} {j} (game {Move/g} {<i} play/g) (game {Move/h} {<j} play/h)
  = game {∞} {Move/g ⊎ Move/h}
      λ { (inj₁ x) → _∧_ {<i} {j} (play/g x) (game play/h)
        ; (inj₂ y) → _∧_ {i} {<j} (game play/g) (play/h y) }
This code type- and termination-checks. However, if I replace the definition of Game with a record definition, like so:
record Game (i : Size) : Set₁ where
  inductive
  constructor game
  field
    {Move} : Set
    {<i} : Size< i
    play : Move → Game <i
Agda no longer believes the definition of _∧_ to be terminating, even though the value of either i or j decreases in each recursive call. As far as I am aware, these two definitions of Game should be equivalent; what causes Agda to successfully termination-check the former, but not the latter?
