Big O notation's alternative definition - algorithm

I know the definition of big O is:
g(n) = O(f(n)) if and only if, for some constants c and n0,
|g(n)| <= c * |f(n)| for all n > n0
All I want to know is why this alternative definition is wrong:
g(n) = O(f(n)) if and only if |g(n)/f(n)| is bounded from above as n → ∞.
I guess it is because f(n) may approach 0 and division by 0 is not defined, but I would like to see an example (I couldn't find one). Please tell me if I'm on the right path.
I hope you can help me.

In short, your alternative definition is right for every f that is nonzero for all x > x0, for some x0. Check out the formal definition on Wikipedia.
To see it for ourselves, let's try proving the two definitions are equivalent, and we'll see the special case arising naturally:
If the first definition holds, then there are c and n0 as described. To get from there to the second definition, we want to divide the first inequality by |f(n)|. To do so we must assume |f(n)| is not 0 for any n > n0, so let's assume that, keeping in mind that if the function does evaluate to 0 we need to treat it differently (|f(n)| = 0 <=> f(n) = 0); there is our special case. With that assumption we can divide and get |g(n)/f(n)| <= c < ∞ for n > n0, which is the second definition.
If on the other hand we start from the second definition, we know that lim sup (as n → ∞) of |g(n)/f(n)| < ∞. We can also be sure this limit superior exists, because the set of values of the left-hand side is defined and bounded from above (again assuming f(n) does not equal 0 for n > n0, for some n0). Call the limit superior c and multiply by |f(n)|, and we get the first definition.
So, all in all, the two definitions are equivalent for every f(n) that is nonzero for all n > n0, for some n0.
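For a concrete f that is nonzero beyond some point, the equivalence can be checked numerically. A small sketch; the choices g(n) = 3n + 5, f(n) = n and the witnesses c = 4, n0 = 5 are my own illustrative examples, not from the question:

```python
# Two ways of stating g(n) = O(f(n)), spot-checked on a sample:
# (1) |g(n)| <= c*|f(n)| for all n > n0
# (2) |g(n)/f(n)| stays bounded as n grows (requires f(n) != 0)

def g(n):
    return 3 * n + 5

def f(n):
    return n

c, n0 = 4, 5  # witnesses for definition (1)

# Definition (1): the inequality holds past n0.
assert all(abs(g(n)) <= c * abs(f(n)) for n in range(n0 + 1, 10**4))

# Definition (2): the ratio never exceeds the same constant c,
# which is only well-defined because f(n) != 0 for n > n0.
assert all(abs(g(n) / f(n)) <= c for n in range(n0 + 1, 10**4))
```

If f(n) were 0 infinitely often, definition (2) would be meaningless while definition (1) could still hold, which is exactly the special case above.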

Related

Coq `simpl` reduces `S n + m` to `S(n + m)` for free?

I'm just beginning to learn Coq via software foundations. One of the homework Theorems (with my successful proof elided) in Induction.v is:
Theorem plus_n_Sm : forall n m : nat,
S (n + m) = n + (S m).
Proof.
(* elided per request of authors *)
Qed.
Later, I noticed that the following similar "leftward" statement comes for free with the built-in tactic simpl:
Example left_extract : forall n m : nat, S n + m = S (n + m).
Proof.
intros. simpl. reflexivity.
Qed.
I've perused the documentation and haven't been able to figure out why simpl gives us one direction "for free" while the other direction requires a user-supplied proof. The documentation is over my head at this early point in my learning.
I guess it has something to do with left-ness being built in and right-ness not, but the propositions seem to my childlike eyes to be of equal complexity and subtlety. Would someone be so kind as to explain why, and perhaps give me some guidance about what is going on with simpl?
Why should I NOT be surprised by my finding?
What other good things can I expect from simpl, so it surprises me less and so I can eventually predict what it's going to do and rely on it?
What's the best way to wade through the theory -- unfolding of iota reductions and what not -- to focus on the relevant bits for this phenomenon? Or do I have to learn all the theory before I can understand this one bit?
I believe your surprise stems from the fact that you are accustomed to think of addition as a primitive concept. It is not, it is a function that has been defined and other primitive concepts are used to explain its behavior.
The addition function is defined by a function whose name is written with letters (not the + symbol), in a way that looks like this:
Fixpoint add (n m : nat) : nat :=
match n with
| 0 => m
| S p => S (add p m)
end.
You can find this information by typing
Locate "_ + _".
The + notation is used for two functions; only one of them can be applied to numbers.
Coming back to the add function, its very definition says that add 0 m computes to m and add (S n) m computes to S (add n m), but it says nothing about the case where the second argument has the form S m; that is just not written anywhere. Still, the theory of Coq makes it possible to prove the fact.
So the equality that is given for free comes directly from the definition of addition. There are a few other equality statements that are natural to the human eye, but not given for free. They can just be proved by induction.
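To see the asymmetry directly, you can ask Coq to simplify both shapes. A sketch to try in a fresh session; the output comments reflect the expected behaviour described above:

```coq
(* simpl can reduce S n + m because add matches on its FIRST argument. *)
Eval simpl in (fun n m : nat => S n + m).
(* = fun n m : nat => S (n + m) *)

(* With S on the second argument, the match on the first argument n is
   stuck, so simpl leaves the term unchanged; a proof by induction on n
   is needed instead. *)
Eval simpl in (fun n m : nat => n + S m).
(* = fun n m : nat => n + S m *)
```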
Would it be possible to design the Coq system in such a way that both equality statements would be given for free? The answer is probably yes, but the theory must be designed carefully for that.

Landau Notation/Big O notation

In our class the following exercise/example was given:
Compute n_0 and c from the formal definition of each Landau symbol to show that:
2^100 * n belongs to O(n^2).
Then in the Solution the following was done:
n_0 = 2^100 and c = 1.
Show for each n > n_0: 2^100 * n <= n^2.
It is true that n_0^2 = 2^100 * n_0, and for all n > 2^100: n^2 - 2^100 * n > n^2 - n*n = n^2 - n^2 = 0.
I have some questions:
We are looking for n_0 and c, but somehow we just assign values to them? And why those values in particular? Why can't n_0 = 2 and c = 34? Is there a logic behind all of this?
In the last part, I don't see how that expression proves anything; it looks redundant.
If you read the definition of big-O notation, it says precisely that if you can find one such n0 and c for which the inequality holds for all numbers greater than n0, then the big-O relation holds.
Of course you can choose another n0, for example a bigger one. As long as you find one, you have proven the relation.
Although for better help with such questions, I recommend Math.StackExchange or CS.StackExchange.
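Because Python has arbitrary-precision integers, the solution's witnesses can be spot-checked directly, and the suggested alternative (n_0 = 2, c = 34) can be seen to fail. A sanity check, not a proof; the sample points are my own:

```python
# Witnesses from the solution: n0 = 2^100, c = 1.
n0 = 2 ** 100
c = 1

# Spot-check the inequality 2^100 * n <= c * n^2 at a few n > n0.
for n in [n0 + 1, n0 + 12345, 2 * n0, n0 * n0]:
    assert 2 ** 100 * n <= c * n * n

# The pair n0 = 2, c = 34 does NOT work: n = 3 is already a
# counterexample, since 2^100 * 3 is far larger than 34 * 3^2.
n = 3
assert not (2 ** 100 * n <= 34 * n * n)
```

This is why the particular values matter: any (n0, c) pair for which the inequality holds for all n > n0 proves the relation, but not every pair works.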

Prove that 928675*2^n = O(2^n) (Big-O notation complexity)

I am supposed to prove that 928675*2^n = O(2^n) using the mathematical definition of O(f(n)). I came up with the following answer; not sure if this is the right way to approach it though.
Answer: Since 928675 is a constant, we can replace it with K and F(n) = K + 2n, therefore O(f(n)) = O(K + 2n), and since K is a constant it can be taken away from the formula and we are therefore left with O(f(n)) = O(2n)
Can someone please confirm if this is right or not?
Thanks in advance
Edit: Just realized that I wrote + instead of * and forgot a couple of ^ signs
Answer: Since 928675 is a constant, we can replace it with K and F(n) = K*2^n, therefore O(f(n)) = O(K*2^n), and since K is a constant it can be taken away from the formula and we are therefore left with O(f(n)) = O(2^n)
You are supposed to prove exactly that proposition (O(f(n))=O(K*2^n)). You can't use it to prove itself.
The definition of "f(x) is O(g(x))" is that, for some constant real numbers k and x_0, |f(x)| <= |k*g(x)| for all x >= x_0.
That's why, if f(x) = k*g(x), we can say that f(x) is O(g(x)) (|k*g(x)| <= |k*g(x)| for any x). In particular, it is also true for g(x) = 2^x and k = 928675.
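The argument above can be spot-checked numerically by instantiating the definition with k = 928675 and x_0 = 1 (the range of sample points is my own choice):

```python
K = 928675

def f(n):
    return K * 2 ** n

def g(n):
    return 2 ** n

c, n0 = K, 1  # witnesses: |f(n)| <= c * |g(n)| for all n >= n0

# The bound holds with equality at every point, so f(n) = O(g(n)).
assert all(abs(f(n)) <= c * abs(g(n)) for n in range(n0, 200))
```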

Prove or Disprove quantifiers (propositions logic)

What approach can I take to solve these questions:
Prove or disprove the following statements. The universe of discourse is N = {1,2,3,4,...}.
(a) ∀x∃y,y = x·x
(b) ∀y∃x,y = x·x
(c) ∃y∀x,y = x·x.
The best way to solve such problems is first to think about them until you're confident that they can be either proven or disproven.
If they can be disproven, then all you have to do to disprove the statement is provide a counterexample. For instance, for (b), I can think of the counterexample y = 2. There is no number x in N for which x*x = 2. Thus, there is a counterexample, and the statement is false.
If the statement appears to be true, it may be necessary to use some axioms or tautologies to prove the statement. For instance, it is known that two integers that are multiplied together will always produce another integer.
Hopefully this is enough of an approach to get you going.
To prove something exists, find one example for which it is true.
To prove ∀x F(x), take an arbitrary constant a and prove F(a) is true.
Counterexamples can be used to disprove ∀ statements, but not ∃ statements. To disprove ∃x F(x), prove that ∀x !F(x). So, take an arbitrary constant a and show that F(a) is false.
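None of this replaces a proof over all of N, but the three statements can be explored over a finite prefix of N to build intuition. A sketch; the bound 50 is an arbitrary choice of mine:

```python
N = range(1, 51)  # finite prefix of {1, 2, 3, ...}

# (a) for every x there is a y with y = x*x: true (take y = x*x).
assert all(any(y == x * x for y in range(1, x * x + 1)) for x in N)

# (b) for every y there is an x with y = x*x: fails; y = 2 is a
# counterexample, since 2 is not a perfect square.
assert not all(any(y == x * x for x in N) for y in N)

# (c) a single y equals x*x for every x: fails, since already
# 1*1 != 2*2, so no one y can work for both x = 1 and x = 2.
assert not any(all(y == x * x for x in N) for y in N)
```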

How to prove forall n:nat, ~n<n in Coq?

I've been confused for hours and I cannot figure out how to prove
forall n:nat, ~n<n
in Coq. I really need your help. Any suggestions?
This lemma is in the standard library:
Require Import Arith.
Lemma not_lt_refl : forall n:nat, ~n<n.
Print Hint.
Amongst the results is lt_irrefl. A more direct way of realizing that is
info auto with arith.
which proves the goal and shows how:
intro n; simple apply lt_irrefl.
Since you know where to find a proof, I'll just give a hint on how to do it from first principles (which I suppose is the point of your homework).
First, you need to prove a negation. This pretty much means you push n<n as a hypothesis and prove that you can deduce a contradiction. Then, to reason on n<n, expand it to its definition.
intros n H.
red in H. (* or `unfold lt in H` *)
Now you need to prove that S n <= n cannot happen. To do this from first principles, you have two choices at that point: you can try to induct on n, or you can try to induct on <=. The <= predicate is defined by induction, and often in these cases you need to induct on it — that is, to reason by induction on the proof of your hypothesis. Here, though, you'll ultimately need to reason on n, to show that n cannot be an mth successor of S n, and you can start inducting on n straight away.
After induction n, you need to prove the base case: you have the hypothesis 1 <= 0, and you need to prove that this is impossible (the goal is False). Usually, to break down an inductive hypothesis into cases, you use the inversion tactic or one of its variants. This tactic constructs a fairly complex dependent case analysis on the hypothesis. One way to see what's going on is to call simple inversion, which leaves you with two subgoals: either the proof of the hypothesis 1 <= 0 uses the le_n constructor, which requires that 1 = 0, or that proof uses the le_S constructor, which requires that S m = 0. In both cases, the requirement is clearly contradictory with the definition of S, so the tactic discriminate proves the subgoal. Instead of simple inversion H, you can use inversion H, which in this particular case directly proves the goal (the impossible hypothesis case is very common, and it's baked into the full-fledged inversion tactic).
Now, we turn to the induction case, where we quickly come to the point where we would like to prove S n <= n from S (S n) <= S n. I recommend that you state this as a separate lemma (to be proved first), which can be generalized: forall n m, S n <= S m -> n <= m.
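One possible shape for that helper lemma, using inversion as discussed above. This is an illustrative sketch, not the only route; the lemma name is mine, and tactic details may vary slightly between Coq versions:

```coq
Require Import Arith.

Lemma le_Sn_Sm : forall n m, S n <= S m -> n <= m.
Proof.
  intros n m H.
  inversion H; subst.
  - (* le_n case: S m = S n, so the goal becomes n <= n. *)
    apply le_n.
  - (* le_S case: the premise gives S n <= m, so n <= S n <= m. *)
    apply Nat.le_trans with (S n).
    + apply le_S, le_n.
    + assumption.
Qed.
```

With this lemma in hand, the induction step S (S n) <= S n -> False reduces to the induction hypothesis on S n <= n.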
Require Import Arith.
auto with arith.
