Proving equivalence of well-founded recursion - termination

In the answer to the question "Assisting Agda's termination checker", the recursion is proven to be well-founded.
Given the function defined like so (and everything else like in Vitus's answer there):
f : ℕ → ℕ
f n = go _ (<-wf n)
  where
    go : ∀ n → Acc n → ℕ
    go zero    _       = 0
    go (suc n) (acc a) = go ⌊ n /2⌋ (a _ (s≤s (/2-less _)))
I cannot see how to prove f n ≡ f ⌊ n /2⌋. (My actual problem has a different function, but it seems to boil down to the same thing.)
My understanding is that go gets Acc n computed in different ways: f n can be shown to pass an Acc ⌊ n /2⌋ computed by a _ (s≤s (/2-less _)), while f ⌊ n /2⌋ passes an Acc ⌊ n /2⌋ computed by <-wf ⌊ n /2⌋, so the two cannot be seen to be identical.
It seems to me that proof irrelevance must be used somehow, to say that it's enough to have some instance of Acc n, no matter how it was computed - but every way I try to use it contaminates everything with restrictions (e.g. pattern matching doesn't work, or an irrelevant function cannot be applied, etc.).
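For reference, the computational content of f, with the termination evidence erased, is just the following (a Haskell sketch; termination is of course not machine-checked here):

```haskell
-- Haskell analogue of the Agda f above; the Acc argument has no
-- computational role, so it disappears entirely.
f :: Int -> Int
f 0 = 0
f n = f (n `div` 2)
```

In this setting f n == f (n `div` 2) holds for n > 0 simply by definition; the Agda difficulty is entirely about the two differently-computed Acc arguments.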


How in Coq to use mod arithmetic (specifically Zplus_mod theorem) for natural numbers?

I want to apply the library theorem:
Theorem Zplus_mod: forall a b n, (a + b) mod n = (a mod n + b mod n) mod n.
where a b n are expected to have the type Z.
I have a subexpression (a + b) mod 3 in my goal, with a b : nat.
rewrite Zplus_mod fails with the error Found no subterm matching.
rewrite Zplus_mod with (a := a) fails with the error "a" has type "nat" while it is expected to have type "Z".
Since natural numbers are also integers, how can I use the Zplus_mod theorem with nat arguments?
You can't apply this theorem directly: in a context where you are working with natural numbers, the notation mod refers to Nat.modulo, a function on nat, whereas in Zplus_mod the notation mod refers to Z.modulo on integers of type Z.
Using the Search command you can search specifically for theorems about Nat.modulo and (_ + _)%nat, and you will see that some existing theorems are actually close to your needs (Nat.add_mod_idemp_l and Nat.add_mod_idemp_r).
You can also look for a theorem that links Z.modulo and Nat.modulo. This gives mod_Zmod. But this forces you to work in the type of integers:
Require Import Arith ZArith.
Search Z.modulo Nat.modulo.
(* The answer is :
mod_Zmod: forall n m, m <> 0 -> Z.of_nat (n mod m) =
(Z.of_nat n mod Z.of_nat m)%Z *)
One way out is to find a theorem that tells you that the function Z.of_nat is injective. I found it by typing the following command.
Search Z.of_nat "inj".
In the long list that was produced, the relevant theorem is Nat2Z.inj. You then need to show how Z.of_nat interacts with all of the operators involved. Most of these theorems require n to be non-zero, so I add this as a condition. Here is the example.
Lemma example (a b n : nat) :
  n <> 0 -> (a + b) mod n = (a mod n + b mod n) mod n.
Proof.
  intro nn0.
  apply Nat2Z.inj.
  rewrite !mod_Zmod; auto.
  rewrite !Nat2Z.inj_add.
  rewrite !mod_Zmod; auto.
  rewrite Zplus_mod.
  easy.
Qed.
This answers your question, but frankly, I believe you would be better off using the lemmas Nat.add_mod_idemp_l and Nat.add_mod_idemp_r.
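The identity that Zplus_mod (and its Nat counterparts) express can also be sanity-checked computationally; here is a quick Haskell sketch (the helper name addModIdem is made up for illustration):

```haskell
-- (a + b) mod n == ((a mod n) + (b mod n)) mod n, for n /= 0
addModIdem :: Integer -> Integer -> Integer -> Bool
addModIdem a b n = (a + b) `mod` n == (a `mod` n + b `mod` n) `mod` n

-- exhaustive check over a small range of naturals
checkAll :: Bool
checkAll = and [ addModIdem a b n | a <- [0 .. 20], b <- [0 .. 20], n <- [1 .. 10] ]
```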

Which one of the following is better?

So I have two implementations of the function tabulate, which, given a function f :: Int -> a and a number n, should produce the list [f 0, f 1, ..., f (n-1)]. I'm trying to guess which one is better in terms of work and span.
tabulate1 :: (Int -> a) -> Int -> [a]
tabulate1 f n = tab (\x -> f (n - x)) n
  where
    tab _ 0 = []
    tab g n = let (x, xs) = g n ||| tab g (n - 1)
              in x : xs

tabulate2 :: (Int -> a) -> Int -> [a]
tabulate2 f n = tab f 0 (n - 1)
  where
    tab f n m
      | n > m     = []
      | n == m    = [f n]
      | otherwise = let i      = (n + m) `div` 2
                        (l, r) = tab f n i ||| tab f (i + 1) m
                    in l ++ r
While the first one avoids using (++), which has linear work and span, the second one computes the two sublists in parallel but uses (++).
So... which one is better?
Time and space complexity in Haskell is often non-trivial because it is a lazy language: a function might be O(n!), yet its result might never be needed and therefore never evaluated. Likewise, if your function returns a list and only the first 3 elements are needed by other functions, only those are evaluated.
Anyway, your function is just a special case of map, and as such it can be coded in a much more readable way:
tabulate f n = map f [0 .. n - 1]
map is defined recursively in the standard library and participates in list fusion, so this is probably the most optimised version you could get.
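To sanity-check both definitions against map, one can replace the parallel pair ||| with a sequential stand-in (an assumption made for this sketch; in the original setting ||| evaluates both components in parallel):

```haskell
(|||) :: a -> b -> (a, b)
x ||| y = (x, y)  -- sequential stand-in for the parallel pair

tabulate1, tabulate2 :: (Int -> a) -> Int -> [a]

tabulate1 f n = tab (\x -> f (n - x)) n
  where
    tab _ 0 = []
    tab g m = let (x, xs) = g m ||| tab g (m - 1) in x : xs

tabulate2 f n = tab f 0 (n - 1)
  where
    tab g lo hi
      | lo > hi   = []
      | lo == hi  = [g lo]
      | otherwise = let i      = (lo + hi) `div` 2
                        (l, r) = tab g lo i ||| tab g (i + 1) hi
                    in l ++ r
```

Both agree with map f [0 .. n - 1] for every n ≥ 0.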

Isabelle confused by a previous lemma?

I found something strange while doing exercise 2.5 of the Concrete Semantics book. Basically, we have to prove the famous Gauss formula for the sum of the first n integers. Here is my code:
fun sum_upto :: "nat ⇒ nat" where
  "sum_upto 0 = 0" |
  "sum_upto (Suc n) = (Suc n) + (sum_upto n)"

lemma gauss: "sum_upto n = (n * (n + 1)) div 2"
  apply(induction n)
  apply(auto)
  done
This won't go through unless I remove a previous lemma from exercise 2.3:
fun double :: "nat ⇒ nat" where
  "double 0 = 0" |
  "double (Suc n) = add (double n) 2"

lemma double_succ [simp]: "Suc (Suc (m)) = add m 2"
  apply(induction m)
  apply(auto)
  done

lemma double_add: "double m = add m m"
  apply(induction m)
  apply(auto)
  done
Here add is a user-defined function for addition. I've looked at other solutions: there, the definition of double uses Suc (Suc (double n)) instead, in which case the auxiliary lemma is unnecessary and the error in gauss disappears.
However, I'm interested to know why this is happening, because in principle the two problems don't use any common structure.
The [simp] is at fault here. Try removing it and see if that helps.
The underlying reason is that [simp] will add a rule to the so-called simpset and will be used by the simp, auto, etc. methods automatically. Great care needs to be taken before lemmas are declared as [simp] because of that.
In your case, the problem is that Suc (Suc m) = add m 2 is not a good rule for the simpset. It will replace all instances of Suc (Suc m) with add m 2, which is probably the opposite direction of what you want.
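The Gauss identity being proven is easy to sanity-check computationally; here is a small Haskell sketch mirroring sum_upto:

```haskell
-- mirrors the Isabelle sum_upto on machine integers
sumUpto :: Int -> Int
sumUpto 0 = 0
sumUpto n = n + sumUpto (n - 1)

-- closed form from the gauss lemma
gaussHolds :: Int -> Bool
gaussHolds n = sumUpto n == n * (n + 1) `div` 2
```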

Asymptotic analysis of Agda functions inside Agda

Is it possible to asymptotically analyze an Agda function's runtime or memory usage inside Agda itself? I'm trying to come up with something like the following. Suppose I have this Agda function:
data L : Nat → Set where
  []   : L zero
  list : ∀ {n} → Nat → L n → L (suc n)

f : ∀ {n} → L n → Nat
f []          = 0
f (list x xs) = f xs
I want to prove a theorem in Agda that would ultimately mean something like f ∈ O[n]. However, this is rather hard, since I now need to prove something about the implementation of f rather than about its type. I tried using some reflection and metaprogramming, but without much success. The algorithm I have in mind is something like walking the terms of f one by one, as in Lisp.
The biggest trouble is that O[n] is not well defined: I need to be able to construct this class from the n of L n. I then need f as f : L n → Nat, so that thm : ∀ {n} → (f : L n → Nat) → f ∈ O n. But then f is no longer bound to the f we're interested in; instead it is an arbitrary function from a list to a natural. That statement is clearly false, so it cannot be proven.
Is there a way to prove something like this?
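One concrete (if modest) approach, sketched here in Haskell rather than Agda: make the cost explicit by instrumenting the function to return its number of recursive calls alongside its result; a statement like f ∈ O[n] then becomes an ordinary theorem about that counter.

```haskell
-- analogue of the Agda f above, paired with its call count
fCosted :: [Int] -> (Int, Int)  -- (result, number of recursive calls)
fCosted []       = (0, 0)
fCosted (_ : xs) = let (r, c) = fCosted xs in (r, c + 1)
```

For a list of length n the counter is exactly n, which is the O(n) bound in question.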

Provably correct permutation in less than O(n^2)

Written in Haskell, here is the data type that proves that one list is a permutation of another:
data Belongs (x :: k) (ys :: [k]) (zs :: [k]) where
  BelongsHere  :: Belongs x xs (x ': xs)
  BelongsThere :: Belongs x xs xys -> Belongs x (y ': xs) (y ': xys)

data Permutation (xs :: [k]) (ys :: [k]) where
  PermutationEmpty :: Permutation '[] '[]
  PermutationCons  :: Belongs x ys xys -> Permutation xs ys -> Permutation (x ': xs) xys
With a Permutation, we can now permute a record:
data Rec :: (u -> *) -> [u] -> * where
  RNil :: Rec f '[]
  (:&) :: !(f r) -> !(Rec f rs) -> Rec f (r ': rs)

insertRecord :: Belongs x ys zs -> f x -> Rec f ys -> Rec f zs
insertRecord BelongsHere      v rs        = v :& rs
insertRecord (BelongsThere b) v (r :& rs) = r :& insertRecord b v rs

permute :: Permutation xs ys -> Rec f xs -> Rec f ys
permute PermutationEmpty          RNil      = RNil
permute (PermutationCons b pnext) (r :& rs) = insertRecord b r (permute pnext rs)
This works fine. However, permute is O(n^2) where n is the length of the record. I'm wondering if there is a way to get it to be any faster by using a different data type to represent a permutation.
For comparison, in a mutable and untyped setting (which I know is a very different setting indeed), we could apply such a permutation to a heterogeneous record in O(n) time. You represent the record as an array of values and the permutation as an array of new positions (no duplicates are allowed, and every index must be between 0 and n-1). Applying the permutation is just iterating over that array and indexing into the record's array at those positions.
I don't expect that an O(n) permutation is possible in a more rigorously typed setting, but it seems like O(n*log(n)) might be. I appreciate any feedback, and let me know if I need to clarify anything. Also, answers can use Haskell, Agda, or Idris, depending on what feels easier to communicate with.
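The mutable, untyped O(n) scheme described above can be sketched in plain Haskell with immutable arrays (building the result array is a single O(n) pass; applyPerm is a name made up for this sketch):

```haskell
import Data.Array

-- perm ! i is the output position of input element i;
-- perm is assumed to be a bijection on [0 .. n-1]
applyPerm :: Array Int Int -> Array Int a -> Array Int a
applyPerm perm xs = array (bounds xs) [ (perm ! i, xs ! i) | i <- indices xs ]
```

For example, applying the position array [2, 0, 3, 1] to "abcd" sends 'a' to position 2, 'b' to position 0, and so on, giving "bdac".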
A faster simple solution is to compare the sorted versions of the two lists.
Given lists A and B, there exist the sorted permutations
As = sort(A)
Bs = sort(B)
As is a permutation of A, and Bs is a permutation of B. If As == Bs, then A is a permutation of B.
Thus the order of this algorithm is O(n log n) < O(n²).
And this leads to the optimal solution.
Using a different storage format for permutations yields O(n). Using the statements above, we change the storage format of each permutation to hold both
the sorted data, and
the original unsorted data.
To determine whether one list is a permutation of another, a simple comparison of the sorted data is then all that is necessary: O(n).
This answers the question, but the effort is hidden in creating the doubled data storage, so whether this is a real advantage depends on the use case.
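The sorted-comparison check from this answer is a one-liner in Haskell (isPermOf is a name invented here for illustration):

```haskell
import Data.List (sort)

-- O(n log n): two sorts plus a linear comparison
isPermOf :: Ord a => [a] -> [a] -> Bool
isPermOf a b = sort a == sort b
```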

Resources