Landau Notation/Big O notation - algorithm

In our class the following exercise/example was given:
Compute n_0 and c from the formal definition of each Landau symbol to show that:
2^100 * n belongs to O(n^2).
Then in the solution the following was done:
n_0 = 2^100 and c = 1.
Show for each n > n_0: 2^100 * n <= n^2.
It is true that n_0^2 = 2^100 * n_0, and for all n > 2^100: n^2 - 2^100 * n > n^2 - n*n = n^2 - n^2 = 0.
I have some questions:
We are looking for n_0 and c, but somehow we just assign values to them? And why those values in particular? Why couldn't n_0 = 2 and c = 34? Is there a logic behind all of this?
In the last part, I don't see how that expression proves anything; it looks redundant.

If you read the definition of big-O notation, it is framed precisely so that if you can find one such pair n0 and c for which the inequality holds for all numbers greater than n0, then the big-O relation holds.
Of course you can choose another n0, for example a bigger one. As long as you find one, you have proven the relation.
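To see the chosen pair in action, here is a quick numeric spot-check (Python, illustrative only; the sampled values are arbitrary and a finite check is not a proof):

```python
# Spot-check the solution's witnesses n0 = 2**100, c = 1:
# 2**100 * n <= c * n**2 should hold for any sampled n > n0.
n0, c = 2**100, 1
samples = [n0 + 1, 2 * n0, n0 * n0]
print(all(2**100 * n <= c * n * n for n in samples))  # True
```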
For better help with such questions, though, I recommend Math.StackExchange or CS.StackExchange.

Related

Coq `simpl` reduces `S n + m` to `S(n + m)` for free?

I'm just beginning to learn Coq via software foundations. One of the homework Theorems (with my successful proof elided) in Induction.v is:
Theorem plus_n_Sm : forall n m : nat,
S (n + m) = n + (S m).
Proof.
(* elided per request of authors *)
Qed.
Later, I noticed that the following similar "leftward" statement comes for free with the built-in tactic simpl:
Example left_extract : forall n m : nat, S n + m = S (n + m).
Proof.
intros. simpl. reflexivity.
Qed.
I've perused the documentation and haven't been able to figure out why simpl gives us one direction "for free" but the other direction requires a user-supplied proof. The documentation is over my head at this very early point in my learning.
I guess it has something to do with left-ness being built in and right-ness not, but the propositions seem to my childlike eyes to be of equal complexity and subtlety. Would someone be so kind as to explain why, and perhaps give me some guidance about what is going on with simpl?
Why should I NOT be surprised by my finding?
What other good things can I expect from simpl, so it surprises me less and so I can eventually predict what it's going to do and rely on it?
What's the best way to wade through the theory -- unfolding of iota reductions and what not -- to focus on the relevant bits for this phenomenon? Or do I have to learn all the theory before I can understand this one bit?
I believe your surprise stems from the fact that you are accustomed to thinking of addition as a primitive concept. It is not; it is a function that has been defined, and other primitive concepts are used to explain its behavior.
Addition is defined as a function whose name is written with letters (not the + symbol), in a way that looks like this:
Fixpoint add (n m : nat) : nat :=
  match n with
  | 0 => m
  | S p => S (add p m)
  end.
You can find this information by typing
Locate "_ + _".
The + notation is attached to more than one function; only one of them applies to natural numbers.
Coming back to the add function, its very definition explains that add 0 m computes to m and add (S n) m computes to S (add n m), but it does not say anything about the case where the second argument has the form S m; that is just not written anywhere. Still, the theory of Coq makes it possible to prove the fact.
So the equality that is given for free comes directly from the definition of addition. There are a few other equality statements that look just as natural to the human eye but are not given for free; they must instead be proved, typically by induction.
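For instance, here is a sketch in standard Coq (the lemma name add_n_0 is my own choice, to avoid clashing with the library's Nat.add_0_r) of one such equality that is not free:

```coq
(* n + 0 = n is not given for free: add matches on its first
   argument, so "n + 0" does not reduce when n is a variable.
   Induction on n closes the gap. *)
Lemma add_n_0 : forall n : nat, n + 0 = n.
Proof.
  induction n as [| p IH].
  - simpl. reflexivity.          (* 0 + 0 reduces to 0 by definition *)
  - simpl. rewrite IH. reflexivity.
Qed.
```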
Would it be possible to design the Coq system in such a way that both equality statements would be given for free? The answer is probably yes, but the theory must be designed carefully for that.

Big O notation's alternative definition

I know the definition of big O is:
g(n) = O(f(n)) if and only if for some constants c and n0,
|g(n)| <= c.|f(n)| for all n>n0
All I want to know is why this alternative definition is wrong:
g(n) = O(f(n)) if and only if |g(n)/f(n)| is bounded from above as n → ∞,
I guess it is because f(n) may approach 0 and division by 0 is not defined, but I would like to see an example (I couldn't find any one). Please tell me if I'm on the right path.
I hope you can help me.
In short, your alternative definition is correct for every f that is nonzero for all x > x0, for some x0. Check out the formal definition on Wikipedia.
To see it for ourselves, let's try proving the two definitions are equivalent, and we'll see the special case arising naturally:
If the first definition holds, then there are a c and an n0 as described. To get to the second definition we want to divide the first inequality by |f(n)|. To do so we need to assume |f(n)| is not 0 for any n > n0, so let's assume that, keeping in mind that if the function does evaluate to 0 we need to treat it differently (|f(n)| = 0 <=> f(n) = 0); there is our special case. With that assumption, we can divide and get |g(n)/f(n)| <= c < inf for n > n0, which is the second definition.
If, on the other hand, we start with the second definition, we know that lim sup (as n -> inf) of |g(n)/f(n)| < inf. We can also be sure this limit exists, since by the definition the set of values of |g(n)/f(n)| is well defined and bounded from above (again, assuming f(n) does not equal 0) for n > n0, for some n0. Let's call the limit superior c and multiply by |f(n)|, and we get the first definition.
So, all in all, the two definitions are equivalent for every f that is nonzero for all n > n0, for some n0.
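As a concrete (and purely illustrative) numeric check of the bounded-ratio view, take the made-up pair g(n) = 3n + 5 and f(n) = n, which is never zero on N:

```python
# Illustrative check (not a proof): for g(n) = 3n + 5 and f(n) = n,
# the ratio |g(n)/f(n)| is bounded above, matching g(n) = O(f(n)).
def g(n):
    return 3 * n + 5

def f(n):
    return n

ratios = [abs(g(n) / f(n)) for n in range(1, 10_000)]
print(max(ratios) <= 8)  # True: 3n + 5 <= 8n for all n >= 1, so c = 8 works
```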

What does the variable 'C' refer to in Big O or Omega notation?

In Big O or Omega notation, I understand that n refers to the input to the program. But what does the variable C refer to?
While it's hard to answer this question without knowing where you saw a C in discussion of big O notation, I suspect it was used to represent a constant of some kind.
For instance, you can use C in translating a statement using Big-O notation to a statement using predicate logic terminology:
f(x) = O(g(x)) means:
There exist positive real numbers C and x0, such that for all x >= x0, f(x) <= C * g(x)
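To make the definition concrete, here is a small numeric check with made-up functions (f(x) = x^2 + 10 and g(x) = x^2 are illustrative choices, not from the question):

```python
# Hypothetical example: verify one witness pair (C, x0) for
# f(x) = x**2 + 10 and g(x) = x**2, i.e. f(x) = O(g(x)).
C, x0 = 2, 4  # x**2 + 10 <= 2 * x**2 as soon as x**2 >= 10
ok = all((x * x + 10) <= C * (x * x) for x in range(x0, 1000))
print(ok)  # True: this single constant C works for every sampled x >= x0
```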
The choice of C for the name of the constant multiple here is completely arbitrary. C is probably popular simply because it's the first letter of "constant". At most, it's a convention.
You could use some other letter and the meaning would be the same. The Wikipedia page on the topic (at the time I'm writing this) uses M in most of its equations (though C sneaks into a few of them further down the page). It's entirely possible you saw C in one description of big-O notation, but then read some other description of it that didn't use C at all.

Prove that 928675*2^n = O(2^n), Big-O notation complexity

I am supposed to prove that 928675*2^n = O(2^n) and use the mathematical definition of O(f(n)). I came up with the following answer; I'm not sure if this is the right way to approach it, though.
Answer: Since 928675 is a constant, we can replace it with K and F(n) = K + 2n, therefore O(f(n)) = O(K + 2n), and since K is a constant it can be taken away from the formula, and we are therefore left with O(f(n)) = O(2n).
Can someone please confirm if this is right or not?
Thanks in advance
Edit: Just realized that I wrote + instead of * and forgot a couple of ^ signs
Answer: Since 928675 is a constant, we can replace it with K and F(n) = K*2^n, therefore O(f(n)) = O(K*2^n), and since K is a constant it can be taken away from the formula, and we are therefore left with O(f(n)) = O(2^n).
You are supposed to prove exactly that proposition (O(f(n)) = O(K*2^n)). You can't use it to prove itself.
The definition of "f(x) is O(g(x))" is that, for some constant real numbers k and x_0, |f(x)| <= |k*g(x)| for all x >= x_0.
That's why, if f(x) = k*g(x), we can say that f(x) is O(g(x)) (since |k*g(x)| <= |k*g(x)| for any x). In particular, it is also true for g(x) = 2^x and k = 928675.
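A quick sanity check of that last claim (illustration only; the range of n is arbitrary and a finite check is not the proof itself):

```python
# With k = 928675, f(n) = k * 2**n and g(n) = 2**n, the witness
# c = k, n0 = 0 gives |f(n)| <= c * |g(n)| (with equality) for all n.
k = 928675
ok = all(k * 2**n <= k * 2**n for n in range(0, 200))
print(ok)  # True
```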

Prove or Disprove quantifiers (propositions logic)

What approach can I take to solve these questions?
Prove or disprove the following statements. The universe of discourse is N = {1,2,3,4,...}.
(a) ∀x∃y,y = x·x
(b) ∀y∃x,y = x·x
(c) ∃y∀x,y = x·x.
The best way to solve such problems is first to think about them until you're confident that they can be either proven or disproven.
If they can be disproven, then all you have to do to disprove the statement is provide a counterexample. For instance, for (b), I can think of the counterexample y = 2. There is no number x in N for which x*x = 2. Thus, there is a counterexample, and the statement is false.
If the statement appears to be true, it may be necessary to use some axioms or tautologies to prove the statement. For instance, it is known that the product of two natural numbers is always a natural number.
Hopefully this is enough of an approach to get you going.
To prove something exists, find one example for which it is true.
To prove ∀x F(x), take an arbitrary constant a and prove F(a) is true.
Counterexamples can be used to disprove ∀ statements, but not ∃ statements. To disprove ∃x F(x), prove that ∀x !F(x). So, take an arbitrary constant a and show that F(a) is false.
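These strategies can be exercised on the three statements with a finite brute-force search (illustrative only: a finite search can disprove a ∀ claim by exhibiting a counterexample, but can never prove one over all of N):

```python
# Finite sanity checks over an initial segment of N = {1, 2, 3, ...}.
xs = range(1, 101)

# (a) forall x exists y: y = x*x -- true; the witness is y = x*x.
a = all(any(y == x * x for y in range(1, 10001)) for x in xs)

# (b) forall y exists x: y = x*x -- fails at y = 2.
b = all(any(x * x == y for x in xs) for y in xs)

# (c) exists y forall x: y = x*x -- fails: x*x is not constant in x.
c = any(all(y == x * x for x in xs) for y in range(1, 10001))

print(a, b, c)  # True False False
```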

Resources