If f ≠ ω(g), does f = O(g)?

I'm stuck proving or disproving this statement:
If f ≠ ω(g), then f = O(g)
Intuitively, I think the statement is false; however, I can't figure out a valid counterexample.
My thought is that f ≠ ω(g) only tells us that f does not strictly dominate g, i.e. f is not bounded from below by every constant multiple of g, but that tells us nothing about an upper bound.
Any thoughts? Hints in the right direction?

As a hint, this statement is false. Think about two functions that oscillate back and forth, where each function overtakes the other over and over again. That would make f ≠ ω(g), because f is repeatedly dominated by g, and would make f ≠ O(g) because f repeatedly dominates g.
To formalize this, you'll need to find concrete choices of f and g that make this work, and then formally establish that f ≠ ω(g) and f ≠ O(g). I'll leave that as an exercise.
Hope this helps!
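For readers who want to experiment, here is a numeric sketch of one standard oscillating pair (this particular choice of f and g is my own illustration, not part of the answer above):

```python
# Sketch of an oscillating counterexample: f and g take turns
# dominating each other, so neither f = O(g) nor f = omega(g) holds.
#   f(n) = n on odd n, 1 on even n
#   g(n) = n on even n, 1 on odd n

def f(n):
    return n if n % 2 == 1 else 1

def g(n):
    return n if n % 2 == 0 else 1

# f != O(g): along odd n, f(n)/g(n) = n grows without bound.
odd_ratios = [f(n) / g(n) for n in range(1, 100, 2)]
assert odd_ratios == [float(n) for n in range(1, 100, 2)]

# f != omega(g): along even n, f(n)/g(n) = 1/n -> 0, so f does not
# eventually dominate c*g for any constant c > 0.
even_ratios = [f(n) / g(n) for n in range(2, 100, 2)]
assert all(r <= 0.5 for r in even_ratios)
```

Any similar pair where each function infinitely often overtakes the other works equally well.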

Related

Asymptotic analysis of Agda functions inside Agda

Is it possible to asymptotically analyze Agda functions' runtime or memory usage inside Agda itself? I'm trying to come up with something like the following. Suppose I have this Agda function:
data L : Nat → Set where
  []   : L zero
  list : ∀ {n} → Nat → L n → L (suc n)

f : ∀ {n} → L n → Nat
f [] = 0
f (list x xs) = f xs
I want to prove a theorem in Agda that would ultimately mean something like f ∈ O[n]. However, this is rather hard, since I now need to prove something about the implementation of f rather than about its type. I tried using some reflection and metaprogramming, but without much success. I guess the algorithm I have in mind is something like walking over the terms of f one by one, as one would in Lisp.
The biggest trouble is, obviously, that O[n] is not well defined: I need to be able to construct this class from the n of L n. Then I need f as f : L n → Nat, so that thm : ∀ {n} → (f : L n → Nat) → f ∈ (O n). But then f is no longer bound to the f we're interested in; instead, it's an arbitrary function from a list to a natural number. Such a theorem is clearly false, so it cannot be proven.
Is there a way to prove something like this?
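For what it's worth, one standard way to make such a claim provable is to expose the cost in the function's result, so that the O(n) statement is about ordinary data rather than about the implementation. A sketch of the idea in Python rather than Agda (the Agda version would return a pair and carry the bound as a proof term; everything below is my own illustration, not from the question):

```python
# Cost-instrumented version of the f above (illustration only, not Agda):
# the function returns (result, steps), so "f runs in O(n)" becomes a
# plain statement about the second component of the output.

def f_costed(xs):
    """Mirror of the Agda f: fold over the list, counting recursive calls."""
    if not xs:                          # the [] case: result 0, zero steps
        return 0, 0
    result, steps = f_costed(xs[1:])    # the (list x xs) case
    return result, steps + 1

# The O(n) bound is now checkable: steps == len(xs) <= 1 * n.
for n in range(20):
    result, steps = f_costed(list(range(n)))
    assert result == 0 and steps == n
```

In Agda the same move lets the theorem mention a concrete cost function of n instead of an ill-defined class O[n].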

Why are the set of variables in lambda calculus typically defined as countable infinite?

When reading formal descriptions of the lambda calculus, the set of variables seems to always be defined as countably infinite. Why this set cannot be finite seems clear; defining the set of variables as finite would restrict term constructions in unacceptable ways. However, why not allow the set to be uncountably infinite?
Currently, the most sensible answer to this question I have received is that choosing a countably infinite set of variables implies we may enumerate variables making the description of how to choose fresh variables, say for an alpha rewrite, natural.
I am looking for a definitive answer to this question.
Most definitions and constructs in maths and logic include only the minimal apparatus that is required to achieve the desired end. As you note, more than a finite number of variables may be required. But since no more than a countable infinity is required, why allow more?
The reason that this set is required to be countable is quite simple. Imagine that you had a bag full of the variables. There would be no way to count the number of variables in this bag unless the set was denumerable.
Note that bags are isomorphic to sacks.
Uncountable collections of real numbers always contain uncomputable elements: the set of computable real numbers is countably infinite, so the "additional" reals you get by going uncountable are necessarily uncomputable.
As a result, you could never even write out the names of those elements in any reasonable way. For example, unlike a number like π, you cannot have a program that writes out the digits of Chaitin's constant past a certain finite number of digits.
I also don't believe you gain anything from the set being uncountably infinite. So you would introduce uncomputable names without benefit (as far as I can see).
Having a countable number of variables, and a computable bijection between them and ℕ, lets us define a computable injective encoding # from Λ into ℕ:
#v = ⟨0, f(v)⟩, where f is the computable bijection between 𝕍 and ℕ (it exists because 𝕍 is countable) and ⟨m, n⟩ is a computable bijection between ℕ² and ℕ.
#(L M) = ⟨1, ⟨#L, #M⟩⟩
#(λv. L) = ⟨2, ⟨#v, #L⟩⟩
The notation ⌜L⌝ represents c_{#L}, the church numeral representing the encoding of L. For all sets S, #S represents the set {#L | L ∈ S}.
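To make the computability claim concrete, here is a small sketch in Python (the nested-tuple term representation and the use of the Cantor pairing function for ⟨m, n⟩ are my own choices for illustration):

```python
# Sketch: Godel numbering of lambda terms via the Cantor pairing
# function. Terms are represented as ('var', i), ('app', L, M),
# ('lam', i, L); this representation is an assumption of the sketch.

def pair(m, n):
    """Cantor pairing: a computable bijection N x N -> N."""
    return (m + n) * (m + n + 1) // 2 + n

def encode(term):
    tag = term[0]
    if tag == 'var':                       # #v = <0, f(v)>
        return pair(0, term[1])
    if tag == 'app':                       # #(L M) = <1, <#L, #M>>
        return pair(1, pair(encode(term[1]), encode(term[2])))
    if tag == 'lam':                       # #(lam v. L) = <2, <#v, #L>>
        return pair(2, pair(pair(0, term[1]), encode(term[2])))
    raise ValueError(tag)

# Distinct terms get distinct codes because pair is injective.
identity = ('lam', 0, ('var', 0))
self_app = ('lam', 0, ('app', ('var', 0), ('var', 0)))
assert encode(identity) != encode(self_app)
```

The inverse direction (decoding a number back into a term) is computable for the same reason, which is what the undecidability argument below relies on.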
This allows us to prove that lambda calculus is not decidable:
Let A be a non-trivial (not ∅ or Λ) set closed under α and β equality (if L ∈ A and L β= M, then M ∈ A). Let B be the set {L | L⌜L⌝ ∈ A}. Assume that the set #A is recursive. Then f, for which f(x) = 1 if x ∈ #A and f(x) = 0 if x ∉ #A, must be a μ-recursive function. All μ-recursive functions are λ-definable*, so there must be an F for which:
F⌜L⌝ = c_1 ⇔ L ∈ A
F⌜L⌝ = c_0 ⇔ L ∉ A
Let G ≡ λn. iszero (F ⟨1, ⟨n, #n⟩⟩) M_0 M_1, where M_0 is any λ-term in A and M_1 is any λ-term not in A (both exist because A is non-trivial). Note that the map n ↦ #n (the code of the numeral c_n) is computable and therefore λ-definable, as is the pairing function. Given ⌜L⌝, the term G computes F applied to the code of L⌜L⌝, so G⌜L⌝ tests whether L ∈ B.
Now just ask the question "Is G⌜G⌝ in A?" (equivalently, "Is G in B?"). If yes, then G⌜G⌝ β= M_1 ∉ A, so G⌜G⌝ could not have been in A (remember that A is closed under β=). If no, then G⌜G⌝ β= M_0 ∈ A, so it must have been in A.
Either way this is a contradiction, so #A could not have been recursive; therefore no non-trivial set closed under β= is recursive.
Note that {L | L β= true} is closed under β= and non-trivial, so it is therefore not recursive. This means lambda calculus is not decidable.
* The proof that all computable functions are λ-definable (we can have a λ-term F such that F c_{n1} c_{n2} ... = c_{f(n1, n2, ...)}), as well as the proof in this answer, can be found in "Lambda Calculi With Types" by Henk Barendregt (section 2.2).

Big Theta Proof

I got a practice exam question here asking if the following is true/false.
Let f, g, and h be functions from the natural numbers to the positive real numbers. If g ∈ Ω(f), g ∈ O(h), and f ∈ O(h), then g ∈ Θ(h).
I got false for this but it is kind of confusing me now because I don't exactly know what Big omega(f) is.
Can someone clarify if my answer to this question is correct / if not, where I went wrong (and explain if possible please).
Thanks.
Check the link I mentioned in the comment. g ∈ Θ(h) ⇔ g is bounded both above and below by h, and the lower bound is not guaranteed here. From the premises it can only be deduced that g is bounded above by h. So "false" is the correct answer.
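To see this concretely, here is one counterexample checked numerically (the particular f, g, and h are my own choice, not from the answer):

```python
# Sketch of a concrete counterexample: f(n) = 1, g(n) = n, h(n) = n**2
# satisfy all three premises, but g is not Theta(h) since g/h -> 0.

f = lambda n: 1
g = lambda n: n
h = lambda n: n * n

ns = range(1, 1000)
assert all(g(n) >= 1 * f(n) for n in ns)   # g in Omega(f), witness c = 1
assert all(g(n) <= 1 * h(n) for n in ns)   # g in O(h),     witness c = 1
assert all(f(n) <= 1 * h(n) for n in ns)   # f in O(h),     witness c = 1

# g not in Omega(h): g(n)/h(n) = 1/n drops below any fixed c > 0.
assert g(1000) / h(1000) == 0.001
```

Since g lacks a lower bound in terms of h, g ∉ Θ(h) even though every premise holds.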

Big O complexity notation

I have a question regarding big O notation.
If g(n) = O(f(n)) and h(n) = O(f(n)), is g(n) = O(h(n))?
Is this always true, sometimes true, or always false?
Thanks
In words: if g is bounded above by f and h is bounded above by f, is g bounded above by h?
Put that way, you can see that the conclusion doesn't follow from the premises: both premises bound g and h by the same function f, but they say nothing about how g and h compare to each other. The statement is sometimes true (take g = h), but not always: you can construct a counterexample by choosing f, g, and h in such a way that the premises hold but the conclusion does not.
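A minimal numeric sketch of such a counterexample (this particular choice of f, g, and h is mine, not from the answer):

```python
# Counterexample sketch: with f(n) = n**2, g(n) = n**2, h(n) = n,
# both premises hold but g != O(h).

f = lambda n: n * n
g = lambda n: n * n
h = lambda n: n

ns = range(1, 1000)
assert all(g(n) <= 1 * f(n) for n in ns)   # g = O(f), witness c = 1
assert all(h(n) <= 1 * f(n) for n in ns)   # h = O(f), witness c = 1

# But g(n)/h(n) = n is unbounded, so no constant c gives g <= c*h.
assert [g(n) / h(n) for n in (10, 100, 1000)] == [10.0, 100.0, 1000.0]
```

With g = h the conclusion does hold, which is why "sometimes true" is the right verdict rather than "always false".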

Big Omega notation - what is f = Ω(g)?

I've been trying for the better part of an hour to find reference to the following:
f = Ω(g)
But I have had no luck at all. I need to answer a question for an assignment and I can't find references.
The assignment is basically asking me to indicate what it (f = Ω(g)) means, in the context of the following choices:
1. f = Ω(g(n))
2. g = o(ln n)
3. g = o(g(n))
4. g = O(f)
5. f = O(g)
Initially, I thought that perhaps there is an error in the question.
I know option 1 is wrong and assume option 5 is also wrong, but after an hour online I couldn't figure out which one is the answer.
Can someone please explain to me how to figure this out? I realize that might mean giving me the answer so it can be explained, but I'm more interested in why one of these answers are correct.
f = Ω(g) means "f is bounded below by g asymptotically"; f = O(g) means "f is bounded above by g asymptotically", as per the comments.
If a river's upper bound is a bridge, what's a bridge's lower bound? The river.
I would suggest the fourth option, g = O(f): saying that f is bounded below by g is the same as saying that g is bounded above by f.
(For completeness, the "little" versions o and ω assert strict domination: f = o(g) means f/g → 0, and f = ω(g) means f/g → ∞.)
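As a numeric sanity check of the equivalence f = Ω(g) ⇔ g = O(f) on one concrete pair (f(n) = n² and g(n) = n are my own choice; the witness constant c = 1 works in both directions here):

```python
# The same inequality f(n) >= c * g(n), read in the other direction
# as g(n) <= (1/c) * f(n), is what makes Omega and O duals.

f = lambda n: n * n
g = lambda n: n

ns = range(1, 1000)
# f = Omega(g): f(n) >= c * g(n) for all n >= 1, with c = 1 ...
assert all(f(n) >= 1 * g(n) for n in ns)
# ... which is exactly g(n) <= (1/c) * f(n), i.e. g = O(f).
assert all(g(n) <= 1 * f(n) for n in ns)
```

The same one-line rearrangement of the defining inequality proves the equivalence in general.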
