algorithms math - how do I solve and substitute?

I can't understand the basic math behind algorithms. For example, here's a question:
If
f(n) = O(g(n))
is
f(n) * log(f(n)^c) = O(g(n) * log(g(n)))
?
How do I go about answering this question? From what I understand so far, f(n) = O(g(n)) only when f(n) <= c*g(n), where c and n are non-negative. So I need to start plugging values into the above based on that, but how do I do that? Say, if I chose c=5 and n=2, would I plug the values in like so: f(2) * log(f(2)^5) = 5*(g(2) * log(g(2)))? Would that mean that the answer to the original question is false?

By f(n) = O(g(n)), you mean
there exists a k such that f(n) <= k*g(n) for all n >= N_1. -----(1)
which implies log(f(n)^c) <= log(k^c) + log(g(n)^c) <= 2*log(g(n)^c) for all n >= N_2, where N_2 is taken large enough that g(n)^c >= k^c. -----(2)
Multiplying (1) and (2), and absorbing the constant factor 2c into the big-O, gives the required answer:
f(n) * log(f(n)^c) = O(g(n) * log(g(n))).

You can do it like that:
f(n) * log(f(n)^c) = c * f(n) * log(f(n)) = O(1) * O(g(n)) * log(O(g(n))) = O(1) * O(g(n)) * O(log(g(n))) = O(g(n) * log(g(n))).
So the answer to the question is true. All you need here are properties of the logarithm function.
There is one step that may not be clear here: why is log(O(g(n))) = O(log(g(n)))?
Proof: if f(n) = O(g(n)), there is a constant C such that for sufficiently large n
f(n) <= C * g(n). Thus, log(f(n)) <= log(C * g(n)) = log(g(n)) + log(C). For sufficiently large n we also have log(C) <= log(g(n)) (assuming g grows without bound), so we can use C2 = 2 and obtain log(f(n)) <= C2 * log(g(n)) for sufficiently large n.
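A quick numerical sanity check of that last step (just a sketch in Python; f(n) = 10*n and g(n) = n are assumed example functions, not taken from the question):
import math
# Assumed illustration: f(n) = 10*n is O(g(n)) for g(n) = n, with C = 10.
def f(n): return 10 * n
def g(n): return n
# Once g(n) >= C (here n >= 10), log(C) <= log(g(n)), hence log(f(n)) <= 2*log(g(n)).
for n in [20, 100, 10_000, 10**6]:
    assert math.log(f(n)) <= 2 * math.log(g(n))
print("log(f(n)) <= 2*log(g(n)) at all sampled n")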

You're given f(n) = O(g(n)) which means that beyond some point f(n) is at most a fixed positive multiple of g(n). You don't know where that point is, and you don't know the positive multiple. Still you know that
(⋆) f(n) ≤ k * g(n) for n > n₀ for some k > 0
even though k and n₀ are unknown. (I used k to avoid a name collision with c in the second part of the problem.)
Now you're asked to show that f(n) * log(f(n)^c) = O(g(n) * log(g(n))).
Logarithms turn multiplication into addition: log(x * y) = log(x) + log(y). Consequently, they turn repeated multiplication (which is exponentiation) into repeated addition (which is multiplication): log(x^y) = y * log(x).
First notice that log(f(n)^c) = c log(f(n)) because log(x^y) = y * log(x). So you may rewrite the problem: show c * f(n) * log(f(n)) = O(g(n) * log(g(n))).
What's more you can abandon the c on the left hand side: if something is at most a constant times something else (big-O), then any multiple of it is at most some other constant multiplied by the something else. Again you can rewrite the problem: show f(n) * log(f(n)) = O(g(n) * log(g(n))).
Now take the logarithm of (⋆):
(⋆) f(n) ≤ k * g(n) for n > n₀ for some k > 0
(⋆⋆) log(f(n)) ≤ log(k) + log(g(n)) for n > n₀ for some k > 0
The second only follows because log is an increasing function, so if numbers increase their logarithms increase --- taking the logarithm preserves order.
Now you need to multiply these two together to get something like the answer. Again this multiplication preserves order: if 0 ≤ a ≤ A and 0 ≤ b ≤ B then ab ≤ AB.
(⋆⋆⋆) f(n) * log(f(n)) ≤ k * g(n) * log(g(n)) + k * log(k) * g(n)
--- for n > n₀ for some k > 0
Now the left hand side is as in the problem, so you need to show the right hand side is at most some multiple of g(n) * log(g(n)) to finish up.
The first term on the right is exactly the sort of thing you need, but the second term is not a multiple of g(n) * log(g(n)), yet it increases the right hand side if log(k)>0, so you can't just throw it away: doing that would decrease the right hand side, and the left hand side might no longer be at most the right hand side.
(Of course if log(k)≤0 you can just throw the second term away which increases the right hand side and you're done: there's a multiple of g(n) * log(g(n)) there which is what you want.)
Instead what you need to do is increase k * log(k) * g(n) into another multiple of g(n) * log(g(n)). Increasing it means the left hand side is still at most the right hand side, and this makes the whole of the right hand side be a multiple of g(n) * log(g(n)) and the proof will be complete.
To do this, you need to multiply k * log(k) * g(n) by log(g(n)). Provided log(g(n)) ≥ 1 (again beyond some point), which seems reasonable in the theory of algorithms, this will make k * log(k) * g(n) bigger. (I could fuss here, but I won't!) So you may say that
(⋆⋆⋆) f(n) * log(f(n)) ≤ k * g(n) * log(g(n)) + k * log(k) * g(n)
--- for n > n₀ for some k > 0
But log(g(n)) ≥ 1 for n > n₁ and so multiplying the second term by log(g(n)) does not decrease it, so
f(n) * log(f(n)) ≤ k * g(n) * log(g(n)) + k * log(k) * g(n) * log(g(n))
--- for n > max(n₀,n₁) for some k > 0
and simplifying the right hand side
f(n) * log(f(n)) ≤ k(1+log(k)) * g(n) * log(g(n))
--- for n > max(n₀,n₁) for some k > 0
just gives a multiple of g(n) * log(g(n)) because k is a constant, i.e. take k(1+log(k)) to be the positive constant you need. (The case log(k) ≤ 0 was dealt with separately above.) So
f(n) * log(f(n)) = O(g(n) * log(g(n)))
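As a sanity check of the final bound (an illustration, not part of the proof), here is a small numeric experiment in Python with assumed example functions f(n) = 3n and g(n) = n, so (⋆) holds with k = 3:
import math
# Assumed illustration: f(n) = 3*n, g(n) = n, so (*) holds with k = 3 and n0 = 0.
def f(n): return 3 * n
def g(n): return n
k = 3
K = k * (1 + math.log(k))   # the multiple k(1+log(k)) derived above
# The derivation also needs log(g(n)) >= 1, i.e. n >= e here, so sample n >= 3.
for n in [3, 10, 1000, 10**6]:
    assert f(n) * math.log(f(n)) <= K * g(n) * math.log(g(n))
print("f(n)*log(f(n)) <= k*(1+log(k)) * g(n)*log(g(n)) at all sampled n")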

Related

Asymptotic Bounding | If f = 2^n and g = (2^n+1), how does f = Θ(g)?

We are asked to indicate whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g)).
To solve the big O, I found it easy by simply providing constant C = 1 in which case 2^n <= 1(2^n+1).
I was under the impression that to solve the Ω would be impossible since there is no C in which 2^n >= C(2^n+1).
Upon looking into the solutions to check my work, I found that f = Θ(g). How could this be with this problem? What constant C could satisfy this?
What is the problem with C = 0.1, for example? Also, you can show the Theta bound with the limit of the two functions: lim(f(n)/g(n)) as n goes to infinity equals 1, a finite positive constant, which means f(n) = Theta(g(n)).
I can't tell whether g(n) is 2^(n+1) or (2^n)+1. In either case, f(n) is Theta(g(n)).
Assume g(n) = 2^(n+1). We can rewrite this using laws of exponents as (2^n)(2^1) which is the same thing as 2*(2^n). Now we may simply choose c = 1/2 and then f(n) = c * g(n). Because there exists a c for which the functions are simply equal, immediately f(n) = Theta(g(n)).
Assume g(n) = (2^n) + 1. Since f(n) < g(n) we immediately have f(n) = O(g(n)). All we need to show is that f(n) >= c * g(n). If we choose c = 1/2 we need 2^n >= (1/2)*2^n + 1/2, i.e. (1/2)*2^n >= 1/2, which holds for every n >= 0 (so n0 = 1 certainly works: 2^1 >= (1/2)*2^1 + 1/2). Hence f(n) = Theta(g(n)).
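If you want to convince yourself numerically, here is a quick sampled check of both readings of g (an illustration, not a proof):
# Sampled check of the Theta constants for both readings of g.
for n in range(1, 30):
    f = 2 ** n
    g1 = 2 ** (n + 1)   # reading 1: g(n) = 2^(n+1)
    g2 = 2 ** n + 1     # reading 2: g(n) = (2^n) + 1
    assert f == g1 // 2             # f = (1/2)*g1 exactly
    assert f <= g2 and 2 * f >= g2  # (1/2)*g2 <= f <= g2
print("c = 1/2 and c = 1 work as Theta constants for n >= 1 in both readings")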

Two questions regarding Big-O,Theta and Omega notation

Prove or disprove the following claims:
Does there exist a function f(n) such that f(n-k) is not Theta(f(n)), where k >= 1 is a positive constant?
Is there any function for which this claim is true?
I thought about f(n) = n! but I'm not sure that is the correct answer.
Moreover, if f(n) = n! is correct, how can this claim be proved?
Does there exist a function such that (f(n))^2 = O(f(n)) and f(n) = Omega(log(log(n)))?
I think there is no function that makes this claim true.
If this is correct, how could it be proved?
Correct for f(n) = n!. It suffices to show that for any fixed k >= 1, (n - k)! is not Omega(n!), as for any constant c > 0, it holds for all n large enough that c * n! > (n - k)!.
There is no f(n) such that both f(n)^2 = O(f(n)) and f(n) = Omega(log log n). The latter implies that for some constant c > 0 and all n large enough, f(n) > c log log n, and in particular f(n) > 1 for all n large enough. If we now assume that f(n)^2 = O(f(n)), then there exists a constant r > 0 so that for all n large enough, f(n)^2 < r * f(n), namely f(n) < r. But this implies that log log n < (r / c) for all n large enough, which is false for all n > e^(e^(r / c)) (where e is the base of the logarithm).
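For intuition on the first claim, here is a small numeric illustration (k = 1 is an assumed fixed constant; the ratio (n-k)!/n! collapses to 0, so no c > 0 can keep (n-k)! >= c*n! for all large n):
import math
k = 1   # assumed fixed shift for the illustration
for n in [5, 10, 100, 1000]:
    print(n, math.factorial(n - k) / math.factorial(n))   # equals 1/n for k = 1
# the printed ratios shrink toward 0, so (n-k)! is not Omega(n!), and f(n) = n! is a valid example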

simple g(n) such that f(n) = Θ(g(n))

f(n) = 4 * 2^n + 4^n + 20n^5
So, g(n) = 4^n
Now our f(n) = O(g(n))
4 * 2^n + 4^n + 20n^5 ≤ c*4^n
How do we do this? I know how to do it for simple cases, but this one is far more complex. Would it go along the lines of removing the constant 4 and the 20n^5 term to then have 2^n + 4^n ≤ c*4^n?
Or would it be for any c > 4*2^n + 20n^5? That feels like a lame answer, so I'm going to assume I'm wrong. I would prefer if someone hinted at the idea of how to solve these problems rather than giving me the answer, thank you.
Hint / preparations
In the context of asymptotic analysis and, in this case, Big-O notation specifically; generally when wanting to prove that inequalities such as
4 * 2^n + 4^n + 20n^5 ≤ c*4^n, (+)
for some constant c > 0,
for n larger than some constant n0; n > n0
holds, we approach the left hand side expression term by term. Since we're free to choose any constants c and n0 to show that (+) holds, we can always express the lower order terms as less or equal to (≤) the higher order term by making n sufficiently large, e.g., choosing the value of n0 as we see fit.
Solution (spoilers ahead!)
Below follows one way to show that (+) holds for some set of positive constants c and n0. Since you only asked for hints, I suggest you start with the section above, and return to this section in case you get stuck or want to verify the derivation you ended up using.
Term by term analysis (in terms of 4^n) of the left hand side expression of (+) follows.
Term 4 * 2^n:
4 * 2^n = 4^n <=> (2*2)*2^n = (2^2)^n <=> 2^(n+2) = 2^(2n)
<=> n+2 = 2n => n = 2
=> 4 * 2^n ≤ 4^n for n ≥ 2 (i)
Term 4^n: Trivial
Term 20n^5:
for which n is 20 * n^5 = 4^n?
Graphical solution:
=> 20 * n^5 ≤ 4^n for n ≥~ 10.7 (choose 11) (ii)
Inserting inequalities (i) and (ii) in the lhs of (+) yields:
4 * 2^n + 4^n + 20n^5 ≤ 4^n + 4^n + 4^n = 3*4^n, for n > max(2,11) = 11
(11 is the choice of n0, and the factor 3 is the choice of c)
Hence, we have shown that (+) holds for the constants n0 = 11 and c = 3. Naturally, the choice of these constants is not unique (in fact, if such constants exist, infinitely many of them do). Consequently, the lhs of (+) is in O(4^n).
Now, I note that your title mentions Big-Θ (whereas your question covers only Big-O). To derive that the lhs of (+) is Θ(4^n), we also need a lower asymptotic bound on the lhs of (+) in terms of 4^n. Since n > 0, this is, in this case, quite trivial:
4 * 2^n + 4^n + 20n^5 ≥ c2*4^n ? for n > n0 ? (++)
=> 4 * 2^n + 4^n + 20n^5 ≥ 4^n, for n > 0
I.e., in addition to showing that (+) holds (which implies O(4^n)), we've shown that (++) holds for e.g. c2 = 1 (re-using n0 = 11), which implies that the lhs of (+) is Θ(4^n).
One way to approach an asymptotic analysis of a function such as the left hand side of (+) is the somewhat rigorous term-by-term analysis shown in this solution. In practice, however, we know that 4^n will quickly dominate the lower order terms, so we could just pick a fairly large n0 (say 100) and test, term by term, whether each lower order term can be replaced by the dominant term (with a ≤ relation) for n > n0. Or, depending on the context in which we need the asymptotic bounds, we could just glance at the function and, without rigour, state that its asymptotic behaviour is O(4^n), since that is the dominant term. This latter method should, imo, only be used after one has grasped how to formally analyse the asymptotic behaviour of functions and algorithms in the context of Big-O/-Omega/-Theta notation.
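If you want to double-check the constants derived above (c = 3 and n0 = 11 for the upper bound, c2 = 1 for the lower bound), a small brute-force check over a finite range of n (a sanity check only, not a proof) could look like:
# Verify 4*2^n + 4^n + 20*n^5 <= 3*4^n for 11 <= n < 200 (upper bound, c = 3)
# and    4*2^n + 4^n + 20*n^5 >= 4^n   for  1 <= n < 200 (lower bound, c2 = 1)
for n in range(1, 200):
    lhs = 4 * 2**n + 4**n + 20 * n**5
    if n >= 11:
        assert lhs <= 3 * 4**n
    assert lhs >= 4**n
print("c = 3, n0 = 11 and c2 = 1 check out on the sampled range")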
The formal definition is
O(g(n)) = {f(n) | ∃c, n₀ : 0 ≤ f(n) ≤ c g(n), ∀ n ≥ n₀}
But when you want to check whether f(n) ∈ O(g(n)), you can use the simpler
lim sup_{n → ∞} f(n) / g(n) ≤ c
In this case,
lim sup_{n → ∞} (4*2ⁿ + 4ⁿ + 20n⁵) / 4ⁿ = 1
So yes, the lim sup is finite (it equals 1), hence f(n) ∈ O(4ⁿ); in the formal definition we can take, for example, c = 2 for n large enough.
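Numerically, the ratio indeed tends to 1 (sampled values only, just to illustrate the limit criterion):
from fractions import Fraction
for n in [5, 10, 20, 40, 80]:
    r = Fraction(4 * 2**n + 4**n + 20 * n**5, 4**n)
    print(n, float(r))
# the ratio is large for small n (the 20n^5 term dominates there), is about 1.00006
# at n = 20, and is within ~4e-12 of 1 at n = 40; the lim sup is 1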

(Beginner) Questions about the Big O notation

I have some questions related to the Big O notation:
n^3 = Big Omega(n^2)
This is true because:
n^3 >= c * n^2 for all n >= n0
-> lim(n -> infinity) n^3/n^2 >= c
Then I used L'Hospital and got 6n/2 >= c which is true if I for example choose c as 1 and n0 as 1
Are my thoughts right on this one ?
Now I got two pairs:
log n and n/log n: do they lie in Theta, O, or somewhere else? Just tell me where they lie, then I can do the proof myself.
The same question for n^(log n) and 2^n.
And at last:
f(n) = O(n) -> f(n)^2 = O(n^2)
f(n)g(n) = O(f(n)g(n))
The question is: Are these statements correct ?
I would say yes to the first one; I don't really know why, and it seems like there is a hidden trick to this, but I don't really know. Could someone help me out here?
The second one should be true if g(n) lies in O(n), but I don't really know here either.
Seems like you're right here.
As for log(n) and n/log(n), you can check it by finding the limit of log(n)/(n/log(n)) and vice versa.
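For instance, a quick numeric look at that limit (sampled values; natural log is assumed, but the base does not affect the conclusion):
import math
# log(n) / (n/log(n)) = log(n)^2 / n; sampled values of this ratio:
for n in [10, 10**3, 10**6, 10**9]:
    print(n, math.log(n) ** 2 / n)
# the ratio tends to 0, so log(n) = O(n/log(n)) but not Omega(n/log(n)),
# i.e. the two functions are not Theta of each other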
Using the fact that a^b = 2^(b*lg(a)) (the base-2 version of a^b = e^(b*ln(a))):
n^(lg n) = 2^(lg(n) * lg(n)) = 2^(lg(n)^2). Since lg(n)^2 = o(n), this gives n^(lg n) = o(2^n): so n^(lg n) = O(2^n), but 2^n is not O(n^(lg n)).
Let's use the definition and some properties of the Big O(f):
O(f) = f * O(1)
O(1) * O(1) = O(1)
Now we have:
f(n)^2 = f(n) * f(n) = O(n) * O(n) = n * O(1) * n * O(1) = n^2 * O(1) = O(n^2).
f(n)g(n) = f(n)g(n) * O(1) = O(f(n)g(n)).
So, yes, it is correct.
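For a concrete illustration of the first statement, take the assumed example f(n) = 3n + 5, which is O(n); a sampled check then confirms that f(n)^2 stays below a constant multiple of n^2:
# Assumed example: f(n) = 3n + 5 is O(n) with constant 4 for n >= 5.
def f(n): return 3 * n + 5
for n in range(5, 10**4):
    assert f(n) <= 4 * n              # f(n) = O(n)
    assert f(n) ** 2 <= 16 * n ** 2   # hence f(n)^2 = O(n^2), with constant 4^2 = 16
print("f(n)^2 <= 16*n^2 at all sampled n >= 5")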

Solving Recurrence relation: T(n) = 3T(n/5) + lgn * lgn

Consider the following recurrence
T(n) = 3T(n/5) + lgn * lgn
What is the value of T(n)?
(A) Theta(n^(log_5 3))
(B) Theta(n^(log_3 5))
(C) Theta(n log n)
(D) Theta(log n)
Answer is (A)
My Approach :
lgn * lgn = theta(n) since c2*lgn < 2*lglgn < c1*lgn for some n > n0
Above inequality is shown in this picture for c2 = 0.1 and c1 = 1
log_5{3} < 1,
Hence by master theorem answer has to be theta(n) and none of the answers match. How to solve this problem??
Your claim that lg n * lg n = Θ(n) is false. Notice that the limit of (lg n)^2 / n tends toward 0 as n goes to infinity. You can see this using l'Hopital's rule:
lim_{n → ∞} (lg n)^2 / n
= lim_{n → ∞} 2 lg n / n
= lim_{n → ∞} 2 / n
= 0
More generally, using similar reasoning, you can prove that lg n = o(n^ε) for any ε > 0.
Let's try to solve this recurrence using the master theorem. We see that there are three subproblems of size n / 5 each, so we should look at the value of log_5 3. Since (lg n)^2 = o(n^(log_5 3)), we see that the recursion is bottom-heavy and can conclude that the recurrence solves to Θ(n^(log_5 3)), which is answer (A) in your list up above.
Hope this helps!
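If you want to see the Θ(n^(log_5 3)) behaviour numerically, here is a rough check (a sketch only: the base case T(n) = 1 for n < 5 and the use of floor division for n/5 are assumptions, but they don't affect the asymptotics):
import math
from functools import lru_cache
@lru_cache(maxsize=None)
def T(n):
    if n < 5:                # assumed base case
        return 1.0
    return 3 * T(n // 5) + math.log2(n) ** 2
alpha = math.log(3, 5)       # log_5(3) ~= 0.682
for k in [4, 6, 8, 10]:
    n = 5 ** k
    print(n, T(n) / n ** alpha)
# the ratio T(n)/n^(log_5 3) settles toward a constant (roughly 9), consistent with answer (A)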
To apply Master Theorem we should check the relation between
n^(log_5(3)) ≈ n^0.682 and (lg(n))^2
Unfortunately (lg(n))^2 != 2*lg(n): it is lg(n^2) that's equal to 2*lg(n)
Also, there is a big difference, in the Master Theorem, if f(n) is O(n^(log_b(a)-ε)), or instead Θ(n^(log_b(a))): if the former holds we can apply case 1, if the latter holds case 2 of the theorem.
With just a glance, it looks highly unlikely that (lg(n))^2 = Ω(n^0.682), so let's try to prove that (lg(n))^2 = O(n^0.682), i.e.:
∃ n0, c ∈ N+, such that for n > n0, (lg(n))^2 < c * n^0.682
Let's take the square root of both sides (both sides are positive for n > 1, so the direction of the inequality is preserved):
lg(n) < c1 * n^0.341, (where c1 = sqrt(c))
now we can assume that lg(n) = log_2(n) (a different base only changes a multiplicative factor, which can be absorbed into our constant - as you know, constant factors don't matter in asymptotic analysis). Substituting n = 2^m, i.e. m = lg(n), the inequality becomes:
m < c1 * 2^(0.341*m)
which holds, for example, with c1 = 2 for every m ≥ 1 (the ratio m / 2^(0.341*m) is maximised near m ≈ 4.2, where it is about 1.56 < 2), i.e. with c = c1^2 = 4 and n0 = 1.
Therefore, it does hold true that f(n) = O(n^(log_b(a)-ε)), and we can apply case 1 of the Master Theorem and conclude that:
T(n) = Θ(n^(log_5 3))
Same result, a bit more formally.
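A quick numeric sanity check of that case 1 condition (sampled values only):
import math
alpha = math.log(3, 5)   # log_5(3) ~= 0.682
for n in [10, 10**3, 10**6, 10**9]:
    print(n, math.log2(n) ** 2 / n ** alpha)
# the ratio shrinks toward 0, consistent with (lg(n))^2 growing polynomially
# slower than n^(log_5 3), as case 1 requires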
