Asymptotic analysis of Agda functions inside Agda

Is it possible to asymptotically analyze the runtime or memory usage of Agda functions inside Agda itself? I'm trying to come up with something like the following. Suppose I have this Agda function:
data L : Nat → Set where
  []   : L zero
  list : ∀ {n} → Nat → L n → L (suc n)

f : ∀ {n} → L n → Nat
f []          = 0
f (list x xs) = f xs
I want to prove a theorem in Agda that would ultimately mean something like f ∈ O[n]. However, this is rather hard, since I now need to prove something about the implementation of f rather than its type. I tried using some reflection and metaprogramming, but without much success. The algorithm I have in mind is something like walking over the terms of f one by one, as one would in Lisp.
The biggest trouble is that O[n] is not well defined: I need to be able to construct this class from the n of L n. Then I need f as f : L n → Nat, so that thm : ∀ {n} → (f : L n → Nat) → f ∈ O n. But then f is no longer bound to the f we're interested in; instead it is an arbitrary function from a list to a natural number. Such a statement is clearly false, so it cannot be proven.
Is there a way to prove something like this?
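One direction that sidesteps both reflection and the quantification problem is to instrument the function itself with an explicit step counter, and then prove a bound on that counter. Below is a minimal sketch of that idea (using the standard library's ℕ for Nat; cost-f and cost-f-linear are illustrative names, not anything standard):

open import Data.Nat using (ℕ; zero; suc)
open import Data.Product using (_×_; _,_; proj₁; proj₂)
open import Relation.Binary.PropositionalEquality using (_≡_; refl; cong)

data L : ℕ → Set where
  []   : L zero
  list : ∀ {n} → ℕ → L n → L (suc n)

-- f from the question, paired with a count of its recursive calls
cost-f : ∀ {n} → L n → ℕ × ℕ
cost-f []          = 0 , 0
cost-f (list x xs) = proj₁ (cost-f xs) , suc (proj₂ (cost-f xs))

-- the call count is exactly n, and in particular linear in n
cost-f-linear : ∀ {n} (xs : L n) → proj₂ (cost-f xs) ≡ n
cost-f-linear []          = refl
cost-f-linear (list x xs) = cong suc (cost-f-linear xs)

The theorem is now about this particular instrumented definition rather than about an arbitrary f : L n → Nat, which avoids the quantification problem above; the price is that cost-f must be kept in sync with f by hand.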

Related

Lean4: Proving that `(xs = ys) = (reverse xs = reverse ys)`

Intuitively, stating that xs is equal to ys is the same as saying that the respective reversed lists are equal to each other.
I'm currently learning Lean 4 and so I set myself the following exercise. I want to prove the following theorem in Lean 4:
theorem rev_eq (xs ys : List α) : (xs = ys) = (reverse xs = reverse ys)
However, I am not sure whether this theorem can actually be proven in Lean 4. If not, why can it not be proven?
The closest I could get so far is proving the claim under the assumption that xs = ys:
theorem rev_eq' (xs ys : List α) :
    xs = ys -> (xs = ys) = (reverse xs = reverse ys) := by
  intros h
  rw [h]
  simp
Maybe, if one could also prove that the claim holds under the assumption that xs is not equal to ys, the original theorem would follow. I got stuck on that route as well, though.
Any ideas?
In Lean, it is usually idiomatic to state the equality of propositions using Iff, which is equivalent to propositional equality under the axiom propext. From there, Iff is an inductive type with two directions: the one you have already proved, and the other, which follows by induction. (This theorem is still provable without this axiom, for what it's worth.)
But I'd recommend you do induction xs and induction ys and then look at the goals. Two of them should be impossible, and Lean should show you they are contradictions (or indeed you'll get a goal that can be simplified to False = False), and two are trivially true. Make sure to expand the definition of reverse.
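Putting that route together, here is one complete sketch, assuming core Lean 4's List.reverse_reverse lemma; the backwards direction goes through congrArg rather than induction:

theorem rev_eq (xs ys : List α) :
    (xs = ys) = (xs.reverse = ys.reverse) := by
  apply propext          -- reduce equality of Props to an Iff
  constructor
  · intro h              -- forward: rewrite with the hypothesis
    rw [h]
  · intro h              -- backward: reverse both sides once more
    have h' := congrArg List.reverse h
    rw [List.reverse_reverse, List.reverse_reverse] at h'
    exact h'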
This property is typically proved by induction. I have never used Lean, but once the induction on the list is done, it should be easy. You may need a lemma saying that (xs = ys) -> xs#l = ys#l (also by induction, I guess; # stands for list concatenation).

How to use mod arithmetic in Coq (specifically the Zplus_mod theorem) for natural numbers?

I want to apply the library theorem:
Theorem Zplus_mod: forall a b n, (a + b) mod n = (a mod n + b mod n) mod n.
where a, b, and n are expected to have type Z.
I have a subexpression (a + b) mod 3 in my goal, with a b : nat.
rewrite Zplus_mod gives an error Found no subterm matching
rewrite Zplus_mod with (a := a) gives an error "a" has type "nat" while it is expected to have type "Z".
Since natural numbers are also integers, how to use Zplus_mod theorem for nat arguments?
You can't apply this theorem because the notation mod refers to Nat.modulo, a function on natural numbers, in a context where you are working with natural numbers, whereas it refers to Z.modulo when you are working with integers of type Z.
Using the Search command, you can look specifically for theorems about Nat.modulo and (_ + _)%nat, and you will see that some existing theorems are actually close to your needs (Nat.add_mod_idemp_l and Nat.add_mod_idemp_r).
You can also look for a theorem that links Z.modulo and Nat.modulo. This gives mod_Zmod. But this forces you to work in the type of integers:
Require Import Arith ZArith.
Search Z.modulo Nat.modulo.
(* The answer is:
   mod_Zmod : forall n m, m <> 0 ->
     Z.of_nat (n mod m) = (Z.of_nat n mod Z.of_nat m)%Z *)
One way out is to find a theorem that tells you that the function Z.of_nat is injective. I found it by typing the following command.
Search Z.of_nat "inj".
In the long list that was produced, the relevant theorem is Nat2Z.inj. You then need to show how Z.of_nat interacts with all of the operators involved. Most of these theorems require n to be non-zero, so I add this as a hypothesis. Here is the example.
Lemma example (a b n : nat) :
  n <> 0 -> (a + b) mod n = (a mod n + b mod n) mod n.
Proof.
  intro nn0.
  apply Nat2Z.inj.
  rewrite !mod_Zmod; auto.
  rewrite !Nat2Z.inj_add.
  rewrite !mod_Zmod; auto.
  rewrite Zplus_mod.
  easy.
Qed.
This answers your question, but frankly, I believe you would be better off using the lemmas Nat.add_mod_idemp_l and Nat.add_mod_idemp_r.
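For completeness, here is a sketch of that nat-only route; it needs nothing beyond the two idempotence lemmas named above (both carry the same n <> 0 hypothesis):

Require Import Arith.

Lemma example' (a b n : nat) :
  n <> 0 -> (a + b) mod n = (a mod n + b mod n) mod n.
Proof.
  intro nn0.
  (* fold the two inner mods back into the sum *)
  now rewrite Nat.add_mod_idemp_l, Nat.add_mod_idemp_r.
Qed.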

Provably correct permutation in less than O(n^2)

Written in Haskell, here is the data type that proves that one list is a permutation of another:
data Belongs (x :: k) (ys :: [k]) (zs :: [k]) where
  BelongsHere  :: Belongs x xs (x ': xs)
  BelongsThere :: Belongs x xs xys -> Belongs x (y ': xs) (y ': xys)

data Permutation (xs :: [k]) (ys :: [k]) where
  PermutationEmpty :: Permutation '[] '[]
  PermutationCons  :: Belongs x ys xys -> Permutation xs ys -> Permutation (x ': xs) xys
With a Permutation, we can now permute a record:
data Rec :: (u -> *) -> [u] -> * where
  RNil :: Rec f '[]
  (:&) :: !(f r) -> !(Rec f rs) -> Rec f (r ': rs)
insertRecord :: Belongs x ys zs -> f x -> Rec f ys -> Rec f zs
insertRecord BelongsHere v rs = v :& rs
insertRecord (BelongsThere b) v (r :& rs) = r :& insertRecord b v rs
permute :: Permutation xs ys -> Rec f xs -> Rec f ys
permute PermutationEmpty RNil = RNil
permute (PermutationCons b pnext) (r :& rs) = insertRecord b r (permute pnext rs)
This works fine. However, permute is O(n^2) where n is the length of the record. I'm wondering if there is a way to get it to be any faster by using a different data type to represent a permutation.
For comparison, in a mutable and untyped setting (which I know is a very different setting indeed), we could apply a permutation to a heterogeneous record like this in O(n) time. You represent the record as an array of values and the permutation as an array of positions (no duplicates are allowed, and every position must be between 0 and n - 1). Applying the permutation is just iterating over that array and indexing into the record's array with those positions.
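For concreteness, here is a sketch of that untyped baseline in plain Haskell (homogeneous rather than heterogeneous, since heterogeneity is exactly what the typed version buys):

import Data.Array

-- perm ! i names the source position that target position i reads
-- from; building the result array is a single O(n) pass.
applyPerm :: Array Int Int -> Array Int a -> Array Int a
applyPerm perm values =
  array (bounds values) [ (i, values ! (perm ! i)) | i <- indices values ]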
I don't expect that an O(n) permutation is possible in a more rigorously typed setting. But it seems like O(n log n) might be possible. I appreciate any feedback; let me know if I need to clarify anything. Also, answers can use Haskell, Agda, or Idris, depending on what feels easier to communicate with.
A faster, simple solution is to compare the sorted versions of the permutations.
Given permutations A and B, there exist the sorted permutations
As = sort(A)
Bs = sort(B)
where As is a permutation of A and Bs is a permutation of B.
If As == Bs, then A is a permutation of B.
Thus this algorithm runs in O(n log n) < O(n²).
And this leads towards the optimal solution.
Using a different representation of the permutation yields O(n).
Using the statements from above, we change the stored form of each permutation so that it carries both
the sorted data
the original unsorted data
To determine whether one list is a permutation of another, a simple comparison of the sorted data is then all that is needed -> O(n).
This answers the question correctly, but the effort is hidden in creating the doubled data storage, so whether this is a real advantage depends on the use case.
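At the value level, the check described above is a one-liner (a sketch; in the typed setting of the question, the corresponding work would go into sorting at the type level):

import Data.List (sort)

-- Two lists are permutations of one another exactly when their sorted
-- forms coincide; the sort dominates, giving O(n log n).
isPermutationOf :: Ord a => [a] -> [a] -> Bool
isPermutationOf xs ys = sort xs == sort ys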

Proving equivalence of well-founded recursion

In the answer to the question Assisting Agda's termination checker, the recursion is proven to be well-founded.
Given the function defined like so (and everything else as in Vitus's answer there):
f : ℕ → ℕ
f n = go _ (<-wf n)
  where
    go : ∀ n → Acc n → ℕ
    go zero    _       = 0
    go (suc n) (acc a) = go ⌊ n /2⌋ (a _ (s≤s (/2-less _)))
I cannot see how to prove f n == f ⌊ n /2⌋. (My actual problem has a different function, but the problem seems to boil down to the same thing)
My understanding is that go gets its Acc n computed in different ways: I suppose f n can be shown to pass an Acc ⌊ n /2⌋ computed by a _ (s≤s (/2-less _)), while f ⌊ n /2⌋ passes an Acc ⌊ n /2⌋ computed by <-wf ⌊ n /2⌋, so the two cannot be seen as identical.
It seems to me proof irrelevance must be used somehow, to say that it's enough to have some instance of Acc n, no matter how it was computed. But any way I try to use it, it seems to contaminate everything with restrictions (e.g. pattern matching doesn't work, or an irrelevant function cannot be applied, etc.).
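One standard way out is to prove the irrelevance you need instead of assuming it: show that go's result does not depend on which Acc proof it receives, by recursion on the proofs themselves. A sketch, assuming go is lifted out of the where clause so it can be mentioned in a lemma:

go-irrelevant : ∀ n (a a′ : Acc n) → go n a ≡ go n a′
go-irrelevant zero    _       _        = refl
go-irrelevant (suc n) (acc a) (acc a′) =
  go-irrelevant ⌊ n /2⌋ (a _ (s≤s (/2-less _))) (a′ _ (s≤s (/2-less _)))

Since this is an ordinary lemma rather than Agda's irrelevance annotation, none of the pattern-matching restrictions apply; the desired equation then follows by applying go-irrelevant to the two differently computed Acc ⌊ n /2⌋ values identified above.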

Are there propositions that can be proved in classical logic but not in Agda

Unless I'm mistaken, there is no proof for
∀ {A : Set} → ¬ (¬ A) → A
in Agda.
This means you cannot use proofs by contradiction.
Many Maths textbooks use those kinds of proofs, so I was wondering: is it always possible to find an alternative constructive proof? Could you write, e.g., an Algebra textbook using only constructive logic?
In case the answer is no: does this mean constructive logic is in some sense less powerful than classical logic?
Indeed, double negation elimination (and other statements which are logically equivalent to that) cannot be proven in Agda.
-- Law of excluded middle
lem : ∀ {p} {P : Set p} → P ⊎ ¬ P
-- Double negation elimination
dne : ∀ {p} {P : Set p} → ¬ ¬ P → P
-- Peirce's law
peirce : ∀ {p q} {P : Set p} {Q : Set q} →
         ((P → Q) → P) → P
(If you want, you can show that these are indeed logically equivalent; it's an interesting exercise.) But this is a consequence we cannot avoid: one of the important things about constructive logic is that proofs have computational content. However, assuming the law of excluded middle essentially kills any computational content.
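As an aside, one direction of that equivalence exercise takes only a couple of lines (a sketch, with ⊎, ¬_ and ⊥-elim as in the standard library):

-- excluded middle for P implies double negation elimination for P
dne-from-lem : ∀ {p} {P : Set p} → P ⊎ ¬ P → ¬ ¬ P → P
dne-from-lem (inj₁ p)  _   = p
dne-from-lem (inj₂ ¬p) ¬¬p = ⊥-elim (¬¬p ¬p)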
Consider for example the following proposition:
end-state? : Turing → Set
end-state? t = ...
simulate_for_steps : Turing → ℕ → Turing
simulate t for n steps = ...
Terminates : Turing → Set
Terminates machine = Σ ℕ λ n →
  end-state? (simulate machine for n steps)
So, a Turing machine terminates if there exists a number n such that after n steps, the machine is in an end state. Sounds reasonable, right? What happens when we add excluded middle in the mix?
terminates? : Turing → Bool
terminates? t with lem {P = Terminates t}
... | inj₁ _ = true
... | inj₂ _ = false
If we have excluded middle, then any proposition is decidable. That also means we can decide whether a Turing machine terminates, and we have thereby solved the halting problem. So we can have either computability or classical logic, but not both! While excluded middle and the statements equivalent to it help us with proofs, they come at the cost of the computational meaning of the program.
So yes, in this sense, constructive logic is less powerful than classical logic. However, we can simulate classical logic via the double negation translation. Notice that the doubly negated versions of the previous principles hold in Agda:
¬¬dne : ∀ {p} {P : Set p} → ¬ ¬ (¬ ¬ P → P)
¬¬dne f = f λ g → ⊥-elim (g (f ∘ const))
¬¬lem : ∀ {p} {P : Set p} → ¬ ¬ (P ⊎ ¬ P)
¬¬lem f = f (inj₂ (f ∘ inj₁))
If we were in classical logic, you would then use double negation elimination to recover the original statements. There's even a monad dedicated to this transformation; take a look at the double negation monad in the Relation.Nullary.Negation module of the standard library.
What this means is that we can use classical logic selectively. From a certain point of view, constructive logic is more powerful than classical logic precisely because of this: in classical logic you cannot opt out of these statements, they are just there, while constructive logic doesn't force you to use them, but lets you "enable" them in this way when you need them.
Another statement which cannot be proven in Agda is function extensionality. But unlike the classical statements above, this one is desirable in constructive logics.
ext : ∀ {a b} {A : Set a} {B : A → Set b}
      (f g : ∀ x → B x) → (∀ x → f x ≡ g x) → f ≡ g
However, this doesn't mean that it doesn't hold in constructive logic; it's just a property of the theory Agda is based on (which is mostly intensional type theory with axiom K). There are other flavors of type theory in which this statement holds, for example the usual formulations of extensional type theory, or Conor McBride's and Thorsten Altenkirch's observational type theory.
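To see what is at stake, here is a small sketch: the two functions below are pointwise equal, yet proving them propositionally equal requires ext, which is postulated here (postulating it is known to be consistent with Agda; +-identityʳ comes from Data.Nat.Properties):

open import Data.Nat using (ℕ; _+_)
open import Data.Nat.Properties using (+-identityʳ)
open import Relation.Binary.PropositionalEquality using (_≡_)

postulate
  ext : ∀ {a b} {A : Set a} {B : A → Set b}
        (f g : ∀ x → B x) → (∀ x → f x ≡ g x) → f ≡ g

-- pointwise equal, but not provably equal without ext
+0-ext : (λ n → n + 0) ≡ (λ n → n)
+0-ext = ext _ _ +-identityʳ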
