Implications as functions in Coq?

I read that implications are functions, but I have a hard time understanding the example given on the above-mentioned page:
The proof term for an implication P → Q is a function that takes
evidence for P as input and produces evidence for Q as its output.
Lemma silly_implication : (1 + 1) = 2 -> 0 * 3 = 0.
Proof. intros H. reflexivity. Qed.
We can see that the proof term for the above lemma is indeed a
function:
Print silly_implication.
(* ===> silly_implication = fun _ : 1 + 1 = 2 => eq_refl
        : 1 + 1 = 2 -> 0 * 3 = 0 *)
Indeed, it's a function. But its type does not look right to me. From my reading, the proof term for P -> Q should be a function with evidence for Q as its output. Then the output of (1 + 1) = 2 -> 0 * 3 = 0 should be evidence for 0 * 3 = 0 alone, right?
But the Coq printout above shows that the function image is eq_refl : 1 + 1 = 2 -> 0 * 3 = 0, instead of eq_refl : 0 * 3 = 0. I don't understand why the hypothesis 1 + 1 = 2 should appear in the output. Can anyone help explain what is going on here?
Thanks.

Your understanding is correct until:
But the Coq print out above shows that the function image is ...
I think you misunderstand the Print command. Print shows you the term associated with a definition, along with the type of the definition. It does not show the image/output of a function.
For example, the following prints the definition and type of the value x:
Definition x := 5.
Print x.
> x = 5
> : nat
Similarly, the following prints the definition and type of the function f:
Definition f := fun n => n + 2.
Print f.
> f = fun n : nat => n + 2
> : nat -> nat
If you want to see the function's codomain, you have to apply the function to a value, like so:
Definition fx := f x.
Print fx.
> fx = f x
> : nat
If you want to see the image/output of a function, Print won't help you. What you need is Compute, which takes a term (e.g. a function application) and reduces it as far as possible:
Compute (f x).
> = 7
> : nat
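To tie this back to the lemma from the question: Print silly_implication shows the function together with its full arrow type, but applying the function to evidence for 1 + 1 = 2 produces a term whose type is just 0 * 3 = 0. A quick sketch:
Check (silly_implication eq_refl).
> silly_implication eq_refl
> : 0 * 3 = 0
Here eq_refl is accepted as evidence for 1 + 1 = 2 because 1 + 1 computes to 2.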


Allow outer variable inside Julia's sort comparison

The issue:
I want to be able to sort with a custom comparison function that depends on a variable defined in an outer scope, for example:
k = 2
sort([1,2,3], lt=(x,y) -> x + k > y)
This works all dandy and such because k is defined in the global scope.
That's where my issue lies, as I want to do something akin to:
function f()  # hypothetical name; the original sketch omitted it
    k = 2
    comp = (x, y) -> x + k > y
    sort([1, 3, 3], lt = comp)
end
This works, but it feels like a hack, because my comparison function is much bigger and it feels really off to have to define it there in the body of the function.
For instance, this wouldn't work:
comp = (x,y) -> x + k > y # or function comp(x,y) ... end
function f()  # hypothetical name, as above
    k = 2
    sort([1, 3, 3], lt = comp)
end
So I'm just wondering if there's any way to capture variables such as k like you'd be able to do with lambda functions in C++.
Is this what you want?
julia> comp(k) = (x,y) -> x + k > y
comp (generic function with 1 method)
julia> sort([1,3,2,3,2,2,2,3], lt=comp(2))
8-element Vector{Int64}:
3
2
2
2
3
2
3
1
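To relate this to C++ lambda captures: comp(k) is a closure factory, so your big comparison function can live at top level and still see a local k by taking it as an explicit argument. A minimal sketch (function names are made up for illustration):
# top-level closure factory: the returned comparison has captured k
bigcomp(k) = (x, y) -> x + k > y

function sort_with_local_k(xs)
    k = 2
    sort(xs, lt = bigcomp(k))  # the closure carries the local k with it
end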

How to use a lemma a = b backwards in Coq?

Suppose I have a lemma L that says
forall x, x + 1 + 1 = x + 2.
If my goal is of the form a + 1 + 1 = b
I can write a command rewrite L to get a goal of the form a + 2 = b
However, if my goal is of the form a + 2 = b
how to apply the lemma backwards to get a goal a + 1 + 1 = b?
Say
rewrite <- L. (* Rewrite right to left *)
For symmetry, there's also rewrite -> L, which is the same as rewrite L (rewrite left to right).
This is documented in Coq's tactic reference.
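A minimal self-contained sketch showing the backward direction (the lemma and hypothesis names here are illustrative, and lia is just a convenient way to prove L):
Require Import Lia.

Lemma L : forall x, x + 1 + 1 = x + 2.
Proof. intros; lia. Qed.

Lemma demo : forall a b, a + 1 + 1 = b -> a + 2 = b.
Proof.
  intros a b H.
  rewrite <- L.  (* rewrites a + 2 back into a + 1 + 1 *)
  exact H.
Qed.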

Why does a strict length function perform noticeably faster?

I toyed around with definitions to better understand the evaluation model, and wrote two for the length of a list.
The naive definition:
len :: [a] -> Int
len [] = 0
len (_:xs) = 1 + len xs
The strict (and tail-recursive) definition (the bang pattern !n requires the BangPatterns extension):
slen :: [a] -> Int -> Int
slen [] n = n
slen (_:xs) !n = slen xs (n+1)
len [1..10000000] takes about 5-6 seconds to perform.
slen [1..10000000] 0 takes about 3-4 seconds to perform.
I'm curious why. Before I checked the performance, I was positive that they would perform about the same, because len should have only one more thunk to evaluate at most. For demonstration purposes:
len [a,b,c,d]
= 1 + len [b,c,d]
= 1 + 1 + len [c,d]
= 1 + 1 + 1 + len [d]
= 1 + 1 + 1 + 1 + len []
= 1 + 1 + 1 + 1 + 0
= 4
And
slen [a,b,c,d] 0
= slen [b,c,d] 1
= slen [c,d] 2
= slen [d] 3
= slen [] 4
= 4
What makes slen noticeably faster?
P.S. I also wrote a tail-recursive lazy function (just like slen but lazy) as an attempt to close in on the reason -- maybe it's because it was tail-recursive -- but it performed about the same as the naive definition.
The final step of len is not O(1). It is O(n) to add together n numbers. len also uses O(n) memory while slen uses O(1) memory.
The reason it uses O(n) memory is that each thunk uses up some memory. So when you have something like this:
1 + 1 + 1 + 1 + len []
there are five unevaluated thunks (including len []).
In GHCi, we can examine this thunk behavior more easily with the :sprint command, which prints the given value without forcing the evaluation of any thunks (you can learn more from :help). I'll use conses ((:)) since we can more easily evaluate each thunk one at a time, but the principle is the same.
λ> let ys = map id $ 1 : 2 : 3 : [] :: [Int] -- map id prevents GHCi from being too eager here
λ> :sprint ys
ys = _
λ> take 1 ys
[1]
λ> :sprint ys
ys = 1 : _
λ> take 2 ys
[1,2]
λ> :sprint ys
ys = 1 : 2 : _
λ> take 3 ys
[1,2,3]
λ> :sprint ys
ys = 1 : 2 : 3 : _
λ> take 4 ys
[1,2,3]
λ> :sprint ys
ys = [1,2,3]
Unevaluated thunks are represented by _ and you can see that in the original ys there are 4 thunks nested inside of each other, one for each part of the list (including []).
There isn't a good way that I know of to see this for Int because its evaluation is more all-or-nothing, but it still builds up a nested thunk in the same way. If you could see it, the evaluation would look something like this:
len [a,b,c,d]
= 1 + len [b,c,d]
= 1 + 1 + len [c,d]
= 1 + 1 + 1 + len [d]
= 1 + 1 + 1 + 1 + len []
= 1 + 1 + 1 + 1 + 0
= 1 + 1 + 1 + 1 -- Here it stops building the thunks and starts evaluating them
= 1 + 1 + 2
= 1 + 3
= 4
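If you'd rather not use a bang pattern, the same strictness can be had with seq, which forces the accumulator before the recursive call. A sketch equivalent to slen:
-- force the new accumulator right away, so no chain of (+1) thunks builds up
slen' :: [a] -> Int -> Int
slen' [] n = n
slen' (_:xs) n = let n' = n + 1 in n' `seq` slen' xs n'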
David Young's answer gives the correct explanation of the difference in evaluation order. You should think about Haskell evaluation in the way he outlines.
Let me show you how you can see the difference in the Core. I think it's actually more visible with optimizations on, because the evaluation ends up as an explicit case statement. If you've never played with Core before, see the canonical SO question on the topic: Reading GHC Core.
Generate the core output with ghc -O2 -ddump-simpl -dsuppress-all -ddump-to-file SO27392665.hs. You'll see that GHC splits both len and slen into a recursive "worker" function, $wlen or $wslen, and a nonrecursive "wrapper" function. Because the vast majority of the time is spent in the recursive "workers," focus on them:
Rec {
$wlen
$wlen =
  \ @ a_arZ w_sOR ->
    case w_sOR of _ {
      [] -> 0;
      : ds_dNU xs_as0 ->
        case $wlen xs_as0 of ww_sOU { __DEFAULT -> +# 1 ww_sOU }
    }
end Rec }

len
len =
  \ @ a_arZ w_sOR ->
    case $wlen w_sOR of ww_sOU { __DEFAULT -> I# ww_sOU }

Rec {
$wslen
$wslen =
  \ @ a_arR w_sOW ww_sP0 ->
    case w_sOW of _ {
      [] -> ww_sP0;
      : ds_dNS xs_asW -> $wslen xs_asW (+# ww_sP0 1)
    }
end Rec }

slen
slen =
  \ @ a_arR w_sOW w1_sOX ->
    case w1_sOX of _ { I# ww1_sP0 ->
      case $wslen w_sOW ww1_sP0 of ww2_sP4 { __DEFAULT -> I# ww2_sP4 }
    }
You can see that $wslen has only one case, while $wlen has two. If you go look at David's answer, you can trace what happens in $wlen: it does its case analysis on the outermost list constructor ([]/:), then makes the recursive call $wlen xs_as0 (i.e. len xs), whose result it also cases on, i.e. it forces the accumulated thunk.
In $wslen, on the other hand, there's only the one case statement. In the recursive branch, there's simply an unboxed addition, (+# ww_sP0 1), which doesn't create a thunk.
(Note: a previous version of this answer had stated that with -O GHC could specialize $wslen but not $wlen to use unboxed Int#s. That's not the case.)
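As a practical footnote, the idiomatic way to write the strict version is with a strict left fold, which compiles to a similar accumulator loop. A sketch:
import Data.List (foldl')

-- foldl' forces the accumulator at every step, like the bang pattern in slen
len' :: [a] -> Int
len' = foldl' (\n _ -> n + 1) 0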

OCaml function call, calling itself recursively

Here is what I am trying to do. Given z with the signature 'a -> 'a:
let z a =
  if a = 0 then
    0
  else
    a * a;;
If I were to call repeat as repeat(2, f, 2);;, then f should be called twice with 2, as in f (f 2), and the answer should be 16.
I think the problem might be that, when you are defining a recursive function, you need to tell OCaml by using the rec keyword.
Try changing the code to:
let f a =
  if a = 0 then
    0
  else
    a * a

let rec repeathelper n f answer accum =
  if n = accum then
    answer
  else
    repeathelper n f (f answer) (accum + 1)

let repeat n f x = repeathelper n f x 0
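With that fix, a quick check of the example from the question (a sketch):
let () =
  (* repeat 2 f 2 computes f (f 2) = f 4 = 16 *)
  Printf.printf "%d\n" (repeat 2 f 2)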

Understanding the runtime of a recursive SML function involving list appending (using @)

I'm new to algorithm analysis and SML and got hung up on the average-case runtime of the following SML function. I would appreciate some feedback on my thinking.
fun app([]) = []
  | app(h::t) = [h] @ app(t)
So after every recursion we will end up with a bunch of single element lists (and one no-element list).
[1]@[2]@[3]@...@[n]@[]
Where n is the number of elements in the original list and 1, 2, 3, ..., n is just to illustrate what elements in the original list we are talking about. L @ R takes time linear in the length of list L. Assuming A is the constant amount of time @ takes for every element, I imagine this as if:
[1,2]@[3]@[4]@...@[n]@[] took 1A
[1,2,3]@[4]@...@[n]@[] took 2A
[1,2,3,4]@...@[n]@[] took 3A
...
[1,2,3,4,...,n]@[] took (n-1)A
[1,2,3,4,...,n] took nA
I'm therefore thinking that a recurrence for the time would look something like this:
T(0) = C (if n = 0)
T(n) = T(n-1) + An + B (if n > 0)
Where C is just the final matching of the base case app([]) and B is the constant for h::t. Close the recurrence and we will get this (proof omitted):
T(n) = (n²+n)A/2 + Bn + C = (A/2)n² + (A/2)n + Bn + C = Θ(n²)
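For reference, closing it is just an arithmetic series:
T(n) = An + B + T(n-1)
     = A(n + (n-1) + ... + 1) + Bn + T(0)
     = A·n(n+1)/2 + Bn + C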
This is my own conclusion which differs from the answer that was presented to me, namely:
T(0) = B (if n = 0)
T(n) = T(n-1) + A (if n > 0)
Closed form
T(n) = An + B = Θ(n)
Which is quite different. (Θ(n) vs Θ(n²)!) But isn't this assuming that L @ R takes constant time rather than linear? For example, it would be true for addition
fun add([]) = 0
| add(h::t) = h + add(t) (* n + ... + 2 + 1 + 0 *)
or even concatenation
fun con([]) = []
| con(h::t) = h::con(t) (* n :: ... :: 2 :: 1 :: [] *)
Am I misunderstanding the way that L @ R works, or is my analysis (at least sort of) correct?
Yes, you are misunderstanding how the appends nest. Running app [1,2,3] by hand, one function call at a time, gives:
app [1,2,3]
[1]@(app [2,3])
[1]@([2]@(app [3]))
[1]@([2]@([3]@(app [])))
[1]@([2]@([3]@([])))
[1]@([2]@[3])
[1]@([2,3])
[1,2,3]
The left operand of every @ is the singleton [h], so each append takes constant time and the whole run is linear. This is a consequence of the recursive call being on the right-hand side of the @.
Compare this to a naïve version of rev:
fun rev [] = []
  | rev (x::xs) = rev xs @ [x]
This one has the running time you expect: once the recursion has fully expanded into the expression ((([])@[3])@[2])@[1] (taking linear time), it requires n + (n - 1) + (n - 2) + ... + 1, or n(n+1)/2, or O(n²) steps to complete the computation, because here each append's left operand is the whole result built so far. A more efficient rev could look like this:
local
  fun rev' [] ys = ys
    | rev' (x::xs) ys = rev' xs (x::ys)
in
  fun rev xs = rev' xs []
end
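To make the cost asymmetry concrete, here is a sketch with an instrumented append that counts cons operations (the counter and the helper name are made up for illustration):
(* count how many cons cells appending allocates *)
val steps = ref 0
fun append ([], ys) = ys
  | append (x::xs, ys) = (steps := !steps + 1; x :: append (xs, ys))

fun app [] = []
  | app (h::t) = append ([h], app t)  (* left operand is always a singleton *)

(* after app [1,2,3,4]: !steps = 4, one cons per element (linear);
   the naive rev appends with an ever-growing left operand instead,
   for 0 + 1 + ... + (n-1) conses (quadratic) *)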
