(Beginner) Questions about the Big O notation - algorithm

I have some questions related to the Big O notation:
n^3 = Big Omega(n^2)
This is true because:
n^3 >= c * n^2 for all n >= n0
-> lim(n -> infinity) n^3/n^2 >= c
Then I applied L'Hôpital's rule (twice) and got 6n/2 >= c, which is true if, for example, I choose c = 1 and n0 = 1.
Are my thoughts right on this one?
Now I got two pairs:
log n and n/log n: do they lie in Theta, O, or somewhere else? Just tell me where they lie; I can do the proofs myself.
The same question goes for n^(log n) and 2^n.
And at last:
f(n) = O(n) -> f(n)^2 = O(n^2)
f(n)g(n) = O(f(n)g(n))
The question is: Are these statements correct ?
I would say yes to the first one, but I can't explain why; it feels like there is a hidden trick here. Could someone help me out?
The second one should be true if g(n) lies in O(n), but I'm not sure here either.

Seems like you're right here.
As for log(n) and n/log(n), you can check it by finding lim(n -> infinity) log(n)/(n/log(n)) and the reciprocal limit.
Using the fact that a^b = e^(b*ln(a)):
n^log(n) = e^(log(n) * log(n)) = e^(log(n)^2), and since log(n)^2 < n for all large n, this is less than e^n, so n^log(n) = O(C^n) with C = e; 2^n is also O(C^n). In fact log(n)^2 = o(n), so n^log(n) = o(2^n), i.e. n^log(n) grows strictly slower than 2^n.
Let's use the definition and some properties of the Big O(f):
O(f) = f * O(1)
O(1) * O(1) = O(1)
Now we have:
f(n)^2 = f(n) * f(n) = O(n) * O(n) = n * O(1) * n * O(1) = n^2 * O(1) = O(n^2).
f(n)g(n) = f(n)g(n) * O(1) = O(f(n)g(n)).
So, yes, it is correct.
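Going back to the two pairs, here is a quick numerical sanity check (illustrative only, not a proof; it uses natural log, and the sample sizes are arbitrary):

    import math

    # Ratios suggesting log(n) = o(n/log(n)) and n^log(n) = o(2^n).
    for n in [2**10, 2**20, 2**40]:
        log_n = math.log(n)
        r1 = log_n / (n / log_n)            # log(n) vs n/log(n)
        # Compare exponents to avoid overflow: n^log(n) = e^(log(n)^2), 2^n = e^(n*log(2))
        r2 = log_n**2 - n * math.log(2)     # log of the ratio n^log(n) / 2^n
        print(f"n = 2^{int(math.log2(n))}: log(n)/(n/log(n)) = {r1:.2e}, log(n^log(n)/2^n) = {r2:.2e}")

The first ratio shrinks toward 0 and the second (a log-ratio) goes to minus infinity, matching the limits discussed above.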

Related

The Mathematical Relationship Between Big-Oh Classes

My textbook describes the relationship as follows:
There is a very nice mathematical intuition which describes these classes too. Suppose we have an algorithm which has running time N0 when given an input of size n, and a running time of N1 on an input of size 2n. We can characterize the rates of growth in terms of the relationship between N0 and N1:
Big-Oh      Relationship
O(log n)    N1 ≈ N0 + c
O(n)        N1 ≈ 2N0
O(n²)       N1 ≈ 4N0
O(2ⁿ)       N1 ≈ (N0)²
Why is this?
That is because if f(n) is in O(g(n)) then it can be thought of as acting like k * g(n) for some k.
So for example if f(n) = O(log(n)) then it acts like k log(n), and now f(2n) ≈ k log(2n) = k (log(2) + log(n)) = k log(2) + k log(n) ≈ k log(2) + f(n) and that is your desired equation with c = k log(2).
Note that this is a rough intuition only. An example of where it breaks down is that f(n) = (2 + sin(n)) log(n) = O(log(n)). The oscillating 2 + sin(n) bit means that f(2n)-f(n) can be basically anything.
I personally find this kind of rough intuition to be misleading and therefore worse than useless. Others find it very helpful. Decide for yourself how much weight you give it.
Basically what they are trying to show is just basic algebra after substituting 2n for n in the functions.
O(log n)
log(2n) = log(2) + log(n)
N1 ≈ c + N0
O(n)
2n = 2(n)
N1 ≈ 2N0
O(n²)
(2n)^2 = 4n^2 = 4(n^2)
N1 ≈ 4N0
O(2ⁿ)
2^(2n) = 2^(n*2) = (2^n)^2
N1 ≈ (N0)²
Since O(f(n)) ~ k * f(n) (almost by definition), you want to look at what happens when you put 2n in for n. In each case:
N1 ≈ k*log 2n = k*(log 2 + log n) = k*log n + k*log 2 ≈ N0 + c where c = k*log 2
N1 ≈ k*(2n) = 2*k*n ≈ 2N0
N1 ≈ k*(2n)^2 = 4*k*n^2 ≈ 4N0
N1 ≈ k*2^(2n) = k*(2^n)^2 ≈ N0*2^n ≈ N0^2/k
So the last one is not quite right, anyway. Keep in mind that these relationships are only true asymptotically, so the approximations will be more accurate as n gets larger. Also, f(n) = O(g(n)) only means g(n) is an upper bound for f(n) for large enough n. So f(n) = O(g(n)) does not necessarily mean f(n) ~ k*g(n). Ideally, you want that to be true, since your big-O bound will be tight when that is the case.
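Here is a small Python sketch of that doubling experiment, using synthetic "running times" with k = 1 (the model functions and the sample size n = 20 are arbitrary choices for illustration):

    import math

    # Model running times for each class, with the constant k = 1.
    models = {
        "log n": lambda n: math.log(n, 2),
        "n":     lambda n: n,
        "n^2":   lambda n: n ** 2,
        "2^n":   lambda n: 2 ** n,
    }

    n = 20
    for name, f in models.items():
        N0, N1 = f(n), f(2 * n)
        print(f"{name:>5}: N0 = {N0:.4g}, N1 = {N1:.4g}, N1/N0 = {N1/N0:.4g}")

For log n the ratio is close to 1 (N1 ≈ N0 + 1 with base-2 logs), for n it is 2, for n^2 it is 4, and for 2^n the printed N1 equals N0², matching the table.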

Is (n+1)! in the order of n!? Can you show me a proof?

What about (n-1)!?
Also, if you could show me a proof, that would help me understand better.
I'm stuck on this one.
To show that (n+1)! is in O(n!) you have to show that there is a constant c such that, for all big enough n (n > n0), the inequality
(n+1)! <= c * n!
holds. However, since (n+1)! = (n+1) * n!, this simplifies to
n + 1 <= c
which clearly does not hold, since c is a constant and n can be arbitrarily large.
On the other hand, (n-1)! is in O(n!). The proof is left as an exercise.
(n+1)! = (n+1) * n!
O((n+1) * n!) = O(n*n! + n!) = O(2*n*n!) = O(n*n!) > O(n!)
(n-1)! = n!/n
O((n-1)!) = O(n!/n) < O(n!)
I wasn't formally introduced to algorithmic complexity, so take what I write with a grain of salt.
That said, we know n^3 is way worse than n, right?
Well, since (n + 1)! = (n - 1)! * n * (n + 1)
Comparing (n + 1)! to (n - 1)! is like comparing n to n^3
Sorry, I don't have a proof, but expanding the factorial as above should lead to one.
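A quick numeric illustration of those ratios (not a proof; the sample values of n are arbitrary):

    import math

    for n in [10, 100, 1000]:
        grows = math.factorial(n + 1) // math.factorial(n)    # equals n + 1: unbounded
        shrinks = math.factorial(n) // math.factorial(n - 1)  # equals n, so (n-1)!/n! = 1/n -> 0
        print(f"n = {n}: (n+1)!/n! = {grows}, (n-1)!/n! = 1/{shrinks}")

The ratio (n+1)!/n! = n+1 cannot be bounded by any constant c, while (n-1)!/n! = 1/n <= 1, which is why (n+1)! is not O(n!) but (n-1)! is.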

Algorithm complexity, solving recursive equation

I'm taking a Data Structures and Algorithms course and I'm stuck at this recursive equation:
T(n) = logn*T(logn) + n
Obviously this can't be handled with the Master Theorem, so I was wondering if anybody has any ideas for solving this recurrence. I'm pretty sure it should be solved with a change of variables, like considering n to be 2^m, but I couldn't manage to find any good fix.
The answer is Theta(n). To prove something is Theta(n), you have to show it is Omega(n) and O(n). Omega(n) in this case is obvious because T(n)>=n. To show that T(n)=O(n), first
Pick a large finite value N such that log(n)^2 < n/100 for all n>N. This is possible because log(n)^2=o(n).
Pick a constant C>100 such that T(n)<Cn for all n<=N. This is possible due to the fact that N is finite.
We will show inductively that T(n)<Cn for all n>N. Since log(n)<n, by the induction hypothesis, we have:
T(n) < n + log(n) C log(n)
= n + C log(n)^2
< n + (C/100) n
= C * (1/100 + 1/C) * n
< C/50 * n
< C*n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
This is by no means an official proof but I think it goes like this.
The key is the + n part. Because of this, T is bounded below by n, i.e. T(n) is Omega(n). So let's assume that T(n) = O(n) and have a go at that.
Substitute into the original relation
T(n) = (log n)O(log n) + n
= O(log^2(n)) + O(n)
= O(n)
So it still holds.
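For what it's worth, here is a small numeric sketch of the recurrence (assuming natural log and an arbitrary base case T(x) = 1 for x <= 2, which doesn't affect the asymptotics); the ratio T(n)/n drifts toward 1, consistent with T(n) = n + o(n):

    import math

    def T(x):
        # T(x) = log(x) * T(log(x)) + x, with an arbitrary base case for small x.
        if x <= 2:
            return 1.0
        return math.log(x) * T(math.log(x)) + x

    for n in [1e3, 1e6, 1e12]:
        print(f"n = {n:.0e}: T(n)/n = {T(n)/n:.9f}")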

Division operation on asymptotic notation

Suppose
S(n) = Big-Oh(f(n)) & T(n) = Big-Oh(f(n))
where both bounds are in terms of the same f(n).
My question is: why is S(n)/T(n) = Big-Oh(1) incorrect?
Consider S(n) = n^2 and T(n) = n. Then both S and T are O(n^2) but S(n) / T(n) = n which is not O(1).
Here's another example. Consider S(n) = sin(n) and T(n) = cos(n). Then S and T are O(1) but S(n) / T(n) = tan(n) is not O(1). This second example is important because it shows that even if you have a tight bound, the conclusion can still fail.
Why is this happening? Because the obvious "proof" completely fails. The obvious "proof" is the following. There are constants C_S and C_T and N_S and N_T where n >= N_S implies |S(n)| <= C_S * f(n) and n >= N_T implies |T(n)| <= C_T * f(n). Let N = max(N_S, N_T). Then for n >= N we have
|S(n) / T(n)| <= (C_S * f(n)) / (C_T * f(n)) = C_S / C_T.
This is completely and utterly wrong. It is not the case that |T(n)| <= C_T * f(n) implies that 1 / |T(n)| <= 1 / (C_T * f(n)). In fact, what is true is that 1 / |T(n)| >= 1 / (C_T * f(n)). The inequality reverses, and that suggests there is a serious problem with the "theorem." The intuitive idea is that if T is "small" (i.e., bounded) then 1 / T is "big." But we're trying to show that 1 / T is "small" and we just can't do that. As our counterexamples show, the "proof" is fatally flawed.
However, there is a theorem here that is true. Namely, if S(n) is O(f(n)) and T(n) is Ω(f(n)), then S(n) / T(n) is O(1). The above "proof" works for this theorem (thanks are due to Simone for the idea to generalize to this statement).
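Here is a quick numeric illustration of that corrected statement, with the hypothetical choices f(n) = n^2, S(n) = n^2 + 3n (so S = O(n^2)) and T(n) = n^2/2 (so T = Ω(n^2)):

    # S is O(n^2) and T is Omega(n^2), so S(n)/T(n) should stay bounded.
    def S(n):
        return n ** 2 + 3 * n

    def T(n):
        return n ** 2 / 2

    for n in [10, 10**3, 10**6]:
        print(f"n = {n}: S(n)/T(n) = {S(n) / T(n):.4f}")

The ratio settles near 2, i.e. it is O(1), in contrast to the S(n) = n^2, T(n) = n counterexample above.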
Here's one counterexample:
Let's say f(n) = n^2, S(n) = n^2 and T(n) = n. Now both S and T are in O(f(n)) (you have to remember that O(n^2) is a superset of O(n), so everything that's in O(n) is also in O(n^2)), but U(n) = S(n)/T(n) = n^2/n = n is definitely not in O(1).
As the others explained, S(n) / T(n) is not O(1) in general.
Your doubt probably derives from confusion between O and Θ; in fact, if:
S(n) = Θ(n) and T(n) = Θ(n)
then the following is true:
S(n) / T(n) = Θ(1) and thus S(n) / T(n) = O(1)
You can see that your original claim (with O in place of Θ) is wrong if you plug some actual functions into your formula, e.g.
S(n) = n^2
T(n) = n
f(n) = n^2
S(n) / T(n) = n
n != O(1)
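A quick numeric sketch of the Θ versus O distinction, using the hypothetical pair S(n) = 2n + 3 and T(n) = n (both Θ(n)):

    def S(n):
        return 2 * n + 3   # Theta(n)

    def T(n):
        return n           # Theta(n)

    for n in [10, 10**3, 10**6]:
        print(f"n = {n}: S(n)/T(n) = {S(n) / T(n):.4f}")

With both functions Θ(n) the ratio approaches the constant 2, whereas with S(n) = n^2 and T(n) = n (both merely O(n^2)) it equals n and blows up.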

How to show that 5n = O(n log n)

I have this as a homework question and don't remember learning it in class. Can someone point me in the right direction or have documentation on how to solve these types of problems?
More formally...
First, we prove that if f(n) = 5n, then f ∈ O(n). To show this, we must show that there is a constant c such that f(i) ≤ ci for all i ≥ k, for some sufficiently large k. Fortunately, c = 5 makes this trivial.
Next, I'll prove that every f ∈ O(n) is also in O(n * log n). We must show that for some constant c and sufficiently large k, f(i) ≤ ci * log i for all i ≥ k. If we let k be large enough that f(i) ≤ ci and that k ≥ 2 (taking log base 2, so log i ≥ 1), then the result is trivial, since ci ≤ ci * log i.
QED.
Look into the definition of big-O notation. It means that 5n grows no faster than n log n, which is true: n log n is an upper bound on the number of operations to be performed.
You can prove it by applying L'Hôpital's rule to lim(n -> infinity) 5n/(n log n):
g(n) = 5n and f(n) = n log n
Differentiate g(n) and f(n) and you will get something like
5/(ln n + 1) (taking natural log)
which goes to 0 as n -> infinity, so 5n = O(n log n) is true.
I don't remember the wording of the formal definition, but what you have to show is:
5 * n <= c * n * log n for all n > n0
for some constant c and some threshold n0. Find values of c and n0 that make this hold (c = 5 and n0 = 2 work, with log base 2), and you're done.
It's been three years since I touched big-O stuff. But I think you can try to show this:
O(5n) = O(n) ⊆ O(n log n)
O(5n) = O(n) is very easy to show, so all you have to do now is show O(n) ⊆ O(n log n), which shouldn't be too hard either.
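A quick numeric check of the definition with the witnesses c = 5, n0 = 2, and log base 2 (a finite-range sanity check, not a proof):

    import math

    c, n0 = 5, 2
    assert all(5 * n <= c * n * math.log2(n) for n in range(n0, 10**6))
    print("5n <= 5 * n * log2(n) for every n in [2, 10^6)")

Since log2(n) >= 1 for n >= 2, the inequality 5n <= 5 * n * log2(n) actually holds for all n >= 2, which is exactly the statement 5n = O(n log n).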
