I struggle to fill this table in even though I took calculus recently and am good at math. The chapter only explains how to handle lim(n^k / c^n); I have no idea how to compare the other functions. I checked the solution manual and there is no information on that, only a table of answers, which provides little insight.
When I solve these I don't really think about limits: I lean on a couple of facts and some well-known properties of big-O notation.
Fact 1: for all functions f and g and all exponents p > 0, we have f(n) = O(g(n)) if and only if f(n)^p = O(g(n)^p), and likewise for o, Ω, ω, and Θ. This has a straightforward proof from the definition; you just have to raise the constant c to the power p as well.
Fact 2: for all exponents ε > 0, the function lg(n) is o(n^ε). This follows from l'Hôpital's rule for limits: lim lg(n)/n^ε = lim (lg(e)/n)/(ε·n^(ε−1)) = (lg(e)/ε)·lim n^(−ε) = 0.
Fact 3:
If f(n) ≤ g(n) + O(1), then 2^f(n) = O(2^g(n)).
If f(n) ≤ g(n) − ω(1), then 2^f(n) = o(2^g(n)).
If f(n) ≥ g(n) − O(1), then 2^f(n) = Ω(2^g(n)).
If f(n) ≥ g(n) + ω(1), then 2^f(n) = ω(2^g(n)).
Fact 4: lg(n!) = Θ(n lg(n)). The proof uses Stirling's approximation.
To solve (a), use Fact 1 to raise both sides to the power of 1/k and apply Fact 2.
To solve (b), rewrite n^k = 2^(k·lg(n)) and c^n = 2^(n·lg(c)), prove that n·lg(c) − k·lg(n) = ω(1), and apply Fact 3.
(c) is special: the exponent sin(n) oscillates between −1 and 1, so n^sin(n) keeps dipping down toward n^(−1) and climbing back up toward n. Since n^(−1) is o(√n) and n is ω(√n), neither function eventually dominates the other, and that gives a solid row of NO.
To solve (d), observe that n ≥ n/2 + ω(1) and apply Fact 3.
To solve (e), rewrite n^lg(c) = 2^(lg(n)·lg(c)) = 2^(lg(c)·lg(n)) = c^lg(n).
To solve (f), use Fact 4: lg(n!) = Θ(n·lg(n)), and n·lg(n) is exactly lg(n^n).
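If you want a quick numerical sanity check of Facts 2 and 4 before relying on them, here is a small sketch of my own (the helper name lg is mine, not from the chapter):

import math

def lg(x):
    # Base-2 logarithm, matching the book's lg notation.
    return math.log2(x)

# Fact 2: lg(n) / n**eps should tend to 0 for any fixed eps > 0
# (the smaller eps is, the slower the decay, but it still goes to 0).
eps = 0.5
for n in [10, 10**3, 10**6, 10**9, 10**12]:
    print(n, lg(n) / n**eps)

# Fact 4: lg(n!) / (n * lg(n)) should tend to 1, i.e. lg(n!) = Theta(n lg n).
# math.lgamma(n + 1) is ln(n!), so dividing by ln(2) gives lg(n!).
for n in [10, 100, 1000, 10**5]:
    print(n, math.lgamma(n + 1) / math.log(2) / (n * lg(n)))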
Related
T(n)= O(f(n)), G(n)= O(h(n))
How would I prove or disprove:
T(G(n)) = O(h(f(n)))
I think this is false, because it should be O(f(h(n))) instead of O(h(f(n))), since G is applied before T is. I tried substituting polynomial functions for T and G, and I think the order matters: (n^2)! is not equal to (n!)^2. But I am not sure whether this reasoning is correct.
You are correct, this is false; however, I did not quite understand your counterexample.
A counterexample would be to take a function and its inverse, while keeping h very small asymptotically:
T(n) = 2^n, f(n) = 2^n. This is consistent with the given fact, since 2^n = O(2^n).
And
G(n) = lg(n!), h(n) = n·lg(n). This is also consistent with the given fact, since lg(n!) = O(lg(n^n)) = O(n·lg(n)).
However, T(G(n)) = 2^(lg(n!)) = n! and h(f(n)) = 2^n · lg(2^n) = n·2^n. But n! ≠ O(n·2^n), since we have a factorial versus an exponential (times a linear factor), and thus we have disproved the claim.
The reason n! ≠ O(n·2^n) is that n·2^n < 2^n · 2^n = 4^n for n ≥ 1, and we know that a factorial eventually "beats" any exponential.
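If it helps, here is a tiny numeric sanity check of that last step (my own illustration, not part of the proof): the ratio n!/(n·2^n) grows without bound, which is exactly what n! ≠ O(n·2^n) means.

import math

# If n! were O(n * 2**n), this ratio would stay below some fixed constant.
# Instead it explodes, confirming n! != O(n * 2**n).
for n in [5, 10, 20, 30, 40]:
    print(n, math.factorial(n) / (n * 2**n))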
My textbook describes the relationship as follows:
There is a very nice mathematical intuition which describes these classes too. Suppose we have an algorithm which has running time N0 when given an input of size n, and a running time of N1 on an input of size 2n. We can characterize the rates of growth in terms of the relationship between N0 and N1:
Big-Oh      Relationship
O(log n)    N1 ≈ N0 + c
O(n)        N1 ≈ 2N0
O(n²)       N1 ≈ 4N0
O(2ⁿ)       N1 ≈ (N0)²
Why is this?
That is because if f(n) is in O(g(n)) then it can be thought of as acting like k * g(n) for some k.
So for example if f(n) = O(log(n)) then it acts like k log(n), and now f(2n) ≈ k log(2n) = k (log(2) + log(n)) = k log(2) + k log(n) ≈ k log(2) + f(n) and that is your desired equation with c = k log(2).
Note that this is a rough intuition only. An example of where it breaks down is that f(n) = (2 + sin(n)) log(n) = O(log(n)). The oscillating 2 + sin(n) bit means that f(2n)-f(n) can be basically anything.
I personally find this kind of rough intuition to be misleading and therefore worse than useless. Others find it very helpful. Decide for yourself how much weight you give it.
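To see the breakdown mentioned above concretely, here is a small sketch of my own (purely illustrative) that evaluates f(2n) − f(n) for f(n) = (2 + sin(n))·log(n). For a clean k·log(n) this difference would settle near a constant; here it keeps swinging.

import math

def f(n):
    # The example from the answer above: f(n) = (2 + sin(n)) * log(n), which is O(log n).
    return (2 + math.sin(n)) * math.log(n)

# For a pure k*log(n), f(2n) - f(n) would settle near the constant k*log(2).
# The oscillating sin(n) factor makes the difference jump around instead.
for n in range(100, 110):
    print(n, round(f(2 * n) - f(n), 3))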
Basically what they are trying to show is just basic algebra after substituting 2n for n in the functions.
O(log n)
log(2n) = log(2) + log(n)
N1 ≈ c + N0
O(n)
2n = 2(n)
N1 ≈ 2N0
O(n²)
(2n)^2 = 4n^2 = 4(n^2)
N1 ≈ 4N0
O(2ⁿ)
2^(2n) = 2^(n*2) = (2^n)^2
N1 ≈ (N0)²
Since O(f(n)) ~ k * f(n) (almost by definition), you want to look at what happens when you put 2n in for n. In each case:
N1 ≈ k*log 2n = k*(log 2 + log n) = k*log n + k*log 2 ≈ N0 + c where c = k*log 2
N1 ≈ k*(2n) = 2*k*n ≈ 2N0
N1 ≈ k*(2n)^2 = 4*k*n^2 ≈ 4N0
N1 ≈ k*2^(2n) = k*(2^n)^2 ≈ N0*2^n ≈ N0^2/k
So the last one is not quite right, anyway. Keep in mind that these relationships are only true asymptotically, so the approximations will be more accurate as n gets larger. Also, f(n) = O(g(n)) only means g(n) is an upper bound for f(n) for large enough n. So f(n) = O(g(n)) does not necessarily mean f(n) ~ k*g(n). Ideally, you want that to be true, since your big-O bound will be tight when that is the case.
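If you prefer to see the doubling relationships empirically rather than algebraically, a throwaway sketch like this (the representative functions are my own choices) prints N0 = f(n) and N1 = f(2n) for one function per class:

import math

# One representative function per growth class from the table.
growth = {
    "log n": lambda n: math.log(n),
    "n":     lambda n: float(n),
    "n^2":   lambda n: float(n**2),
    "2^n":   lambda n: float(2**n),
}

n = 20
for name, f in growth.items():
    n0, n1 = f(n), f(2 * n)
    # Compare with the table: N1 ~ N0 + c, 2*N0, 4*N0, (N0)^2 respectively.
    print(f"{name:>5}  N0={n0:.4g}  N1={n1:.4g}  N1-N0={n1 - n0:.4g}  N1/N0={n1 / n0:.4g}")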
According to this book, big O means:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus there exists some constant c such that f(n) is always ≤ c · g(n), for large enough n (i.e. , n ≥ n0 for some constant n0).
I have trouble understanding the following big-O statement:
3n² − 100n + 6 = O(n²), because I choose c = 3 and 3n² > 3n² − 100n + 6.
How can 3 be the constant factor? In 3n² − 100n + 6, if we drop the low-order terms −100n and 6, aren't 3n² and 3·n² the same? How should I work through this?
I'll take the liberty of slightly paraphrasing the question to:
Why do 3n² − 100n + 6 and n² have the same asymptotic complexity?
For that to be true, the definition has to hold in both directions.
First:
let c = 3.
Then 3n² − 100n + 6 ≤ 3n² as soon as 100n ≥ 6, so for n ≥ 1 the inequality is always satisfied.
The other way around:
let c = 1. We need n² ≤ 3n² − 100n + 6, i.e. 2n² − 100n + 6 ≥ 0.
We have a parabola opened upwards, therefore there is again some n0 after which the inequality is always satisfied.
Let's look at the definition you posted for f(n) in O(g(n)):
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus there
exists some constant c such that f(n) is always ≤ c · g(n), for
large enough n (i.e. , n ≥ n0 for some constant n0).
So, we only need to find one set of constants (c, n0) that fulfils
f(n) < c · g(n), for all n > n0, (+)
but this set is not unique. I.e., the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite amount of different such pairs.
Note that here I've switched to strict inequalities, which is really only a matter of taste, but I prefer this latter convention. Now, we can re-state the Big-O definition in possibly more easy-to-understand terms:
... we can say that f(n) is O(g(n)) if we can find a constant c such
that f(n) is less than c·g(n) for all n larger than n0, i.e., for all
n > n0.
Now, let's look at your function f(n)
f(n) = 3n^2 - 100n + 6 (*)
Let's describe your function as the sum of its highest-order term and another function
f(n) = 3n^2 + h(n) (**)
h(n) = 6 - 100n (***)
We now study the behaviour of h(n) and f(n), respectively:
h(n) = 6 - 100n
what can we say about this expression?
=> if n > 6/100, then h(n) < 0, since 6 - 100*(6/100) = 0
=> h(n) < 0, given n > 6/100 (i)
f(n) = 3n^2 + h(n)
what can we say about this expression, given (i)?
=> if n > 6/100, then f(n) = 3n^2 + h(n) < 3n^2
=> f(n) < c*n^2, with c=3, given n > 6/100 (ii)
Ok!
From (ii) we can choose constant c=3, given that we choose the other constant n0 as larger than 6/100. Let's choose the first integer that fulfils this: n0=1.
Hence, we've shown that (+) holds for the constant set (c,n0) = (3,1), and subsequently, f(n) is in O(n^2).
For a reference on asymptotic behaviour, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
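If you want to check the witness (c, n0) = (3, 1) from (ii) numerically, a short Python test (my own, just for illustration) does it:

def f(n):
    return 3 * n**2 - 100 * n + 6

c, n0 = 3, 1
# (+) demands f(n) < c * n**2 for all n > n0; spot-check a large range.
assert all(f(n) < c * n**2 for n in range(n0 + 1, 10**5))
print("f(n) < 3*n^2 holds for every tested n >", n0)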
[Sketch: y = 3n^2 (top graph) vs y = 3n^2 - 100n + 6]
Consider the sketch above. By your definition, 3n^2 only needs to be bigger than 3n^2 - 100n + 6 for large enough n (i.e. , n ≥ n0 for some constant n0). Let that n0 = 5 in this case (it could be something a little smaller, but it's clear which graph is bigger by n=5 so we'll just go with that).
Clearly from the graph, 3n^2 >= 3n^2 - 100n + 6 in the range we've plotted. The only way for 3n^2 - 100n + 6 to get bigger than 3n^2 then is for it to grow more steeply.
But the gradients of 3n^2 and 3n^2 - 100n + 6 are 6n and 6n - 100 respectively, so 3n^2 - 100n + 6 can't grow more steeply and must therefore always stay underneath.
So your definition holds - 3n^2 - 100n + 6 <= 3n^2 for all n>=5
I am not an expert, but this looks a lot like what we just covered in our real analysis course.
Basically, if you have something like f(n) = 3n^2 − 100n + 6, the "fastest growing" term "wins" over the other terms when n is really, really big.
So in this case 3n^2 surpasses whatever 100n is when n is really big.
Another example would be something like f(n) = n/n^2 or f(n) = n! * n^2.
The first one goes to zero, as n simply cannot "keep up" with n^2. In the second example n! clearly grows far faster than n^2, so the n! factor is what drives the growth; strictly speaking, though, a factor in a product does not stop mattering the way a lower-order term in a sum does, so n! * n^2 is not simply Θ(n!).
And terms like +6, which have no n in them at all, are constants and matter even less, since they cannot grow no matter how big n gets.
It is all about what happens when n is really big. If your n is 34934854385754385463543856, then n^2 is a hell of a lot bigger than 100n, because n^2 = n * n = 34934854385754385463543856 * 34934854385754385463543856.
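To make that concrete, here is a purely illustrative two-term comparison at that value of n:

n = 34934854385754385463543856   # the "really big" n from above
print(3 * n**2)                  # the quadratic term: ~3.7e51
print(100 * n)                   # the linear term:    ~3.5e27
print((3 * n**2) // (100 * n))   # the quadratic term is ~10^24 times larger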
I came across two asymptotic function proofs.
1. f(n) = O(g(n)) implies 2^f(n) = O(2^g(n))
Given: f(n) ≤ C1 g(n)
So, 2^f(n) ≤ 2^C1 g(n) --(i)
Now, 2^f(n) = O(2^g(n)) → 2^f(n) ≤ C2 2^g(n) --(ii)
From (i) we find that (ii) will be true.
Hence 2^f(n) = O(2^g(n)) is TRUE.
Can you tell me if this proof is right? Is there any other way to solve this?
2. f(n) = O((f(n))^2)
How do I prove the second example? Here I consider two cases: one where f(n) < 1 and the other where f(n) > 1.
Note: None of them are homework questions.
The attempted proof for example 1 looks well-intentioned but is flawed. First, "2^f(n) ≤ 2^C1 g(n)" reads as 2^f(n) ≤ (2^C1)·g(n), which in general is false; what actually follows from the given is 2^f(n) ≤ 2^(C1·g(n)). In the line beginning with "Now", you should say explicitly what constant C2 you have in mind. And "From (i) we find that (ii) will be true" is an assertion, not an argument: (ii) is exactly what you are trying to prove, and it does not follow from (i), because 2^(C1·g(n)) = (2^g(n))^C1, which is not bounded by any constant times 2^g(n) once C1 > 1 and g(n) is unbounded.
A function like f(n) = 1/n disproves the claim in example 2 because there are no constants N and C such that for all n > N, f(n) < C*(f(n))². Proof: Let some N and C be given. Choose n>N, n>C. f(n) = 1/n = n*(1/n²) > C*(1/n²) = C*(f(n))². Because N and C were arbitrarily chosen, this shows that there are no fixed values of N and C such that for all n > N, f(n) < C*(f(n))², QED.
Saying that “f(n) ≥ 1” is not enough to allow proving the second claim; but if you write “f(n) ≥ 1 for all n” or “f() ≥ 1” it is provable. For example, if f(n) = 1/n for odd n and 1+n for even n, we have f(n) > 1 for even n > 0, and less than 1 for odd n. To prove that f(n) = O((f(n))²) is false for this f, use the same proof as in the previous paragraph but with the additional provision that n is odd.
Actually, “f(n) ≥ 1 for all n” is stronger than necessary to ensure f(n) = O((f(n))²). Let ε be any fixed positive value. No matter how small ε is, “f(n) ≥ ε for all n > N'” ensures f(n) = O((f(n))²). To prove this, take C = max(1, 1/ε) and N=N'.
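A quick numeric illustration of my own for the f(n) = 1/n counterexample: the ratio f(n)/(f(n))² is exactly n, so no constant C can ever dominate it.

def f(n):
    return 1 / n

# If f(n) = O((f(n))**2) held, f(n) / f(n)**2 would be bounded by some constant C.
# For f(n) = 1/n that ratio is exactly n, which is unbounded.
for n in [10, 100, 10**4, 10**6]:
    print(n, f(n) / f(n)**2)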
I am trying to get the correct Big-O of the following code snippet:
s = 0
for x in seq:
    for y in seq:
        s += x*y
    for z in seq:
        for w in seq:
            s += x-w
According to the book I got this example from (Python Algorithms), they explain it like this:
The z-loop is run for a linear number of iterations, and it contains a linear loop, so the total complexity there is quadratic, or Θ(n²). The y-loop is clearly Θ(n). This means that the code block inside the x-loop is Θ(n + n²). This entire block is executed for each round of the x-loop, which is run n times. We use our multiplication rule and get Θ(n(n + n²)) = Θ(n² + n³) = Θ(n³), that is, cubic.
What I don't understand is: how can O(n(n + n²)) become O(n³)? Is the math correct?
The math being done here is as follows. When you say O(n(n + n²)), that's equivalent to saying O(n² + n³) by simply distributing the n throughout the product.
The reason that O(n² + n³) = O(n³) follows from the formal definition of big-O notation, which is as follows:
A function f(n) = O(g(n)) iff there exist constants n0 and c such that for any n ≥ n0, |f(n)| ≤ c|g(n)|.
Informally, this says that as n gets arbitrarily large, f(n) is bounded from above by a constant multiple of g(n).
To formally prove that n² + n³ is O(n³), consider any n ≥ 1. Then we have that
n² + n³ ≤ n³ + n³ = 2n³
So we have that n² + n³ = O(n³), with n0 = 1 and c = 2. Consequently, we have that
O(n(n + n²)) = O(n² + n³) = O(n³).
To be truly formal about this, we would need to show that if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)). Let's walk through a proof of this. If f(n) = O(g(n)), there are constants n0 and c such that for n ≥ n0, |f(n)| ≤ c|g(n)|. Similarly, since g(n) = O(h(n)), there are constants n'0 and c' such that for n ≥ n'0, |g(n)| ≤ c'|h(n)|. So this means that for any n ≥ max(n0, n'0), we have that
|f(n)| ≤ c|g(n)| ≤ c·c'|h(n)|
And so f(n) = O(h(n)).
To be a bit more precise - in the case of the algorithm described here, the authors are saying that the runtime is Θ(n³), which is a stronger result than saying that the runtime is O(n³). Θ notation indicates a tight asymptotic bound, meaning that the runtime grows at the same rate as n³, not just that it is bounded from above by some multiple of n³. To prove this, you would also need to show that n³ is O(n² + n³). I'll leave this as an exercise to the reader. :-)
More generally, if you have any polynomial of degree k, that polynomial is O(n^k) by a similar argument. To see this, let P(n) = Σ_{i=0}^{k} a_i·n^i. Then, for any n ≥ 1, we have that
|P(n)| ≤ Σ_{i=0}^{k} |a_i|·n^i ≤ Σ_{i=0}^{k} |a_i|·n^k = (Σ_{i=0}^{k} |a_i|)·n^k,
so P(n) = O(n^k).
Hope this helps!
n(n + n²) = n² + n³
Big-O notation only cares about the dominant term as n goes to infinity, so the whole algorithm is thought of as Θ(n³).
O(n(n+n^2)) = O(n^2 + n^3)
Since the n^3 term dominates the n^2 term, the n^2 term is negligible and thus it is O(n^3).
The y loop can be discounted because of the z loop (O(n) + O(n^2) -> O(n^2))
Forget the arithmetic.
Then you're left with three nested loops that all iterate over the full length of 'seq', so it's O(n^3)
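One more way to convince yourself, without any asymptotics: count how many times the innermost statements actually run and compare with n² + n³. Here is a quick sketch of my own based on the snippet from the question (count_ops is a made-up helper name):

def count_ops(n):
    # Count how many times the two innermost statements execute for len(seq) == n.
    seq = range(n)
    ops = 0
    for x in seq:
        for y in seq:
            ops += 1          # stands in for: s += x*y
        for z in seq:
            for w in seq:
                ops += 1      # stands in for: s += x-w
    return ops

for n in [10, 20, 40]:
    print(n, count_ops(n), n**2 + n**3)   # the two counts match exactly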