Recurrence relation: T(n) = 2T(n/16) + n log n - algorithm

Can the Master Theorem be applied?
Say, for T(n) = 2T(n/16) + n log n, how is the Master Theorem applied here?
I get a = 2 and b = 16, but I am not sure about c and k.

To solve such a recurrence relation T(n) = a·T(n/b) + f(n), you first calculate e = log_b(a).
Then (for some ε > 0):
f(n) ∈ O(n^(e−ε)) ⇒ T(n) ∈ Θ(n^e)
f(n) ∈ Θ(n^e) ⇒ T(n) ∈ Θ(n^e · log n)
f(n) ∈ Ω(n^(e+ε)) ⇒ T(n) ∈ Θ(f(n))
For more details see the Master Theorem.
So in your case: a = 2, b = 16 ⇒ e = log_16(2) = 0.25. Since f(n) = n log n ∈ Ω(n^(0.25+ε)) (take, say, ε = 0.5) and the regularity condition a·f(n/b) ≤ k·f(n) holds for some k < 1 (here 2·(n/16)·log(n/16) ≤ (1/8)·n log n), case 3 applies,
so T(n) is in Θ(n log n).
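As a quick sanity check, here is a small sketch (assuming a base case T(1) = 1 and rounding n/16 down) that evaluates the recurrence and compares it against n lg n; the ratio settles toward a constant, consistent with Θ(n log n):

```python
import math
from functools import lru_cache

# Sketch (assumption): base case T(1) = 1, and n/16 rounded down to an integer.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 2 * T(n // 16) + n * math.log2(n)

# The ratio T(n) / (n lg n) should settle toward a constant as n grows.
for n in [16**3, 16**4, 16**5, 16**6]:
    print(n, T(n) / (n * math.log2(n)))
```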

Even if the log n term were not there, the total work shrinks fast enough from level to level (b > a) that the recursion is dominated by its top. Hence, in my opinion, the complexity is dictated by the work done at the highest level, i.e. O(n log n).

Related

Given f = Ω(log n) and g = O(n), is g(n) = O(f(n))?

Given two functions f = Ω(log n) and g = O(n), consider the following statements. For
each statement, write whether it is true or false. For each false statement, write two
functions f and g that show a counter-example.
1) g(n) = O(f(n))
2) f(n) = O(g(n))
3) f(n) = Ω(log(g(n)))
4) f(n) = Θ(log(g(n)))
5) f(n) + g(n) = Ω(log n)
I know that Big O gives an asymptotic upper bound (grows no faster than the function) and Big Omega gives an asymptotic lower bound (grows no slower than the function), but I don't know how that makes the statements above true or false.
1) False. A counterexample is g(n) = n ∈ O(n) and f(n) = log n ∈ Ω(log n). Both assumptions hold, but g(n) is not in O(f(n)).
2) False. A counterexample is g(n) = log n ∈ O(n) and f(n) = n ∈ Ω(log n), but f(n) is not in O(g(n)).
3) True. f(n) ∈ Ω(log n) means there is a constant c > 0 with f(n) ≥ c·log n for all large n. Since g(n) ∈ O(n) (and g(n) ≥ 1 for running-time functions), log(g(n)) = O(log n), and therefore f(n) ∈ Ω(log(g(n))).
4) False. A counterexample is f(n) = n ∈ Ω(log n) and g(n) = log n ∈ O(n), but f(n) is not in Θ(log(g(n))) = Θ(log log n).
5) True. Since f(n) ∈ Ω(log n), there is a constant c > 0 with f(n) ≥ c·log n for all large n, and g(n) ≥ 0 (running times are non-negative), so f(n) + g(n) ≥ c·log n, which means f(n) + g(n) ∈ Ω(log n).
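For counterexample 1, a tiny numeric sketch (using the hypothetical choices f(n) = log n and g(n) = n from above) shows why g(n) is not in O(f(n)): the ratio g(n)/f(n) is unbounded, so no constant c can give g(n) ≤ c·f(n) for all large n.

```python
import math

# Counterexample 1: f(n) = log n (so f is in Omega(log n)),
# g(n) = n (so g is in O(n)); the ratio g(n)/f(n) is unbounded,
# hence g(n) is not in O(f(n)).
f = lambda n: math.log2(n)
g = lambda n: n

for n in [2**10, 2**20, 2**30, 2**40]:
    print(n, g(n) / f(n))
```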

What is the time complexity of the following recurrence?

T(n) = 16T(n/4) + n!
I know it can be solved using Master theorem, but I don't know how to handle
f(n) = n!
This is case three of the Master Theorem.
Since T(n) = 16T(n/4) + n!,
here f(n) = n!.
a = 16 and b = 4, so log_b a = log_4 16 = 2.
The Master Theorem states that T(n) = Θ(f(n)) if
f(n) ∈ Ω(n^c) for some c > log_b a.
Since n! eventually exceeds n^c for every fixed c (i.e. n! > n^c for all n beyond some n0), we have f(n) ∈ Ω(n^c) for a c > log_b a = 2. Hence, by the third case of the Master Theorem, T(n) = Θ(f(n)) = Θ(n!).
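Case 3 (in its CLRS form) also requires the regularity condition a·f(n/b) ≤ k·f(n) for some constant k < 1, which n! satisfies comfortably. A quick numeric sketch of the ratio 16·(n/4)!/n!:

```python
import math

# Regularity condition for case 3: a*f(n/b) <= k*f(n) with a = 16, b = 4, f(n) = n!.
# The ratio 16*(n//4)! / n! collapses toward 0, so any constant k < 1 works.
for n in [8, 16, 32, 64]:
    ratio = 16 * math.factorial(n // 4) / math.factorial(n)
    print(n, ratio)
```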

Which function dominates: f(n) or g(n)?

I am doing an introductory course on algorithms. I've come across this problem which I'm unsure about.
I would like to know which of the two is dominant:
f(n): 100n + log n or g(n): n + (log n)^2
Given the definitions of each of:
Ω, Θ, O
I assumed f(n), so f(n) = Ω(g(n)).
Reason being that n dominates (log n)^2, is that true?
In this case,
lim_{n→∞} f(n) / g(n) = 100.
If you go over calculus definitions, this means that, for any ε > 0, there exists some m for which
100 (1 - ε) g(n) ≤ f(n) ≤ 100 (1 + ε) g(n)
for any n > m.
From the definition of Θ, you can infer that these two functions are Θ of each other.
In general, if
lim_{n→∞} f(n) / g(n) = c exists, and
0 < c < ∞,
then the two functions have the same order of growth (they are Θ of each other).
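A quick numeric sketch of that limit, assuming base-2 logarithms (any base gives the same conclusion):

```python
import math

# f(n)/g(n) = (100n + log n) / (n + (log n)^2) should approach 100.
f = lambda n: 100 * n + math.log2(n)
g = lambda n: n + math.log2(n) ** 2

for n in [10**2, 10**4, 10**6, 10**8]:
    print(n, f(n) / g(n))
```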
n dominates both log(n) and (log n)^2
A little explanation:
f(n) = 100n + log n
Here n dominates log n for large values of n,
so f(n) = Θ(n) .......... [1]
g(n) = n + (log n)^2
Now, (log n)^2 dominates log n,
but n still dominates (log n)^2,
so g(n) = Θ(n) .......... [2]
Taking results [1] and [2] together:
f(n) = Θ(g(n)) and g(n) = Θ(f(n)),
since they grow at the same rate for large values of n.
We can say that f(n) = O(g(n)) if there are constants c > 0 and n0 > 0 such that
f(n) <= c*g(n) for all n > n0.
This is the case in both directions:
# c == 100
100n + log n <= 100n + 100*(log n)^2 = 100*(n + (log n)^2)    (for n > 1)
and
# c == 1
n + (log n)^2 <= 100n + log n    (for n > 1)
Taken together, we've shown that n + (log n)^2 <= 100n + log n <= 100*(n + (log n)^2), which proves that f(n) = Θ(g(n)), which is to say that neither dominates the other. Both functions are Θ(n).
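If you want to convince yourself numerically, here is a small sketch (using base-2 logarithms) that spot-checks both inequalities over a range of n:

```python
import math

# Spot-check n + (log n)^2 <= 100n + log n <= 100*(n + (log n)^2) for sample n >= 2.
for n in [2, 10, 10**3, 10**6, 10**9]:
    f = 100 * n + math.log2(n)
    g = n + math.log2(n) ** 2
    assert g <= f <= 100 * g, n
print("both inequalities hold for the sampled values of n")
```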
g(n) is Ω(f(n)) and, vice versa, f(n) is Ω(g(n)), so neither strictly dominates the other: they are Θ of each other.
Looking at the definition, you see that you can drop the factor 100 in f(n) (you can always multiply by a fixed constant) and you can drop the addends log n and (log n)^2, since both are dominated by the linear term n.
The above follows from n being Θ(n + log n) and Θ(n + (log n)^2).
hope that helps,
fricke

Solving recurrence relation: T(n) = 3T(n/5) + lg n · lg n

Consider the following recurrence
T(n) = 3T(n/5) + lg n · lg n
What is the value of T(n)?
(A) Θ(n^(log_5 3))
(B) Θ(n^(log_3 5))
(C) Θ(n log n)
(D) Θ(log n)
The answer is (A).
My approach:
lg n · lg n = Θ(n), since c2·lg n < 2·lg lg n < c1·lg n for some n > n0.
The above inequality is shown in this picture for c2 = 0.1 and c1 = 1.
log_5 3 < 1,
hence by the Master Theorem the answer would have to be Θ(n), and none of the options match. How do I solve this problem?
Your claim that lg n · lg n = Θ(n) is false. Notice that the limit of (lg n)^2 / n tends toward 0 as n goes to infinity. You can see this using l'Hôpital's rule (ignoring the constant factors introduced by differentiating lg n):
lim_{n→∞} (lg n)^2 / n
= lim_{n→∞} 2 lg n / n
= lim_{n→∞} 2 / n
= 0
More generally, using similar reasoning, you can prove that lg n = o(n^ε) for any ε > 0.
Let's try to solve this recurrence using the Master Theorem. We see that there are three subproblems of size n/5 each, so we should look at the value of log_5 3 ≈ 0.682. Since (lg n)^2 = O(n^(log_5 3 − ε)) for some ε > 0, the recursion is bottom-heavy (the leaves dominate) and we can conclude that the recurrence solves to Θ(n^(log_5 3)), which is answer (A) in your list up above.
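If you want to see the "polynomially smaller" part numerically, here is a small sketch of the ratio (lg n)^2 / n^(log_5 3); it heads toward 0 as n grows, which is what puts the recurrence in case 1:

```python
import math

# f(n) = (lg n)^2 is polynomially smaller than n^(log_5 3) ~= n^0.682,
# so this ratio should head toward 0 as n grows (Master Theorem case 1).
e = math.log(3, 5)
for n in [10**3, 10**6, 10**9, 10**12]:
    print(n, math.log2(n) ** 2 / n ** e)
```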
Hope this helps!
To apply the Master Theorem we should check the relation between
n^(log_5 3) ≈ n^0.682 and (lg n)^2.
Unfortunately (lg n)^2 ≠ 2·lg n: it is lg(n^2) that equals 2·lg n.
Also, there is a big difference, in the Master Theorem, between f(n) being O(n^(log_b a − ε)) and f(n) being Θ(n^(log_b a)): if the former holds we can apply case 1, if the latter holds case 2.
With just a glance, it looks highly unlikely that (lg n)^2 = Ω(n^0.682), so let's try to prove that (lg n)^2 = O(n^(0.682 − ε)) for some ε > 0, say ε ≈ 0.341, i.e.:
∃ n0, c > 0 such that for all n > n0: (lg n)^2 < c · n^0.341
Let's take the square root of both sides (for n > 1 both sides are positive, so the inequality is preserved):
lg n < c1 · n^0.1705, where c1 = sqrt(c).
We can assume lg = log_2 (a different base only changes a multiplicative constant, which is absorbed into c1; as you know, constant factors don't matter in asymptotic analysis). Since lg n = o(n^δ) for every δ > 0, as shown with l'Hôpital's rule above, the ratio lg n / n^0.1705 tends to 0, so the inequality holds for any c1 > 0 (in particular c1 = 1) once n0 is chosen large enough.
Therefore it does hold that f(n) = O(n^(log_b a − ε)), so we can apply case 1 of the Master Theorem and conclude that:
T(n) = Θ(n^(log_5 3))
Same result, a bit more formally.

Time complexity and Big-O notation specific question

A question in one of my past exams is a multi-choice question:
Choose the FALSE statement: 7(log n) + 5n + n(log log n) + 3n(ln n) is
A. O(n^2)
B. Ω(n^2)
C. O(n log^2 n)
D. Ω(n)
E. Θ(n log n)
First I concluded that the running time had to be Θ(n log n), which ruled out option E. I then concluded that option B, Ω(n^2), was false, because I knew that Θ(n log n) is smaller than Θ(n^2), so Ω(n^2) could not be true. So I thought B would be the answer.
But I also realised that C couldn't be true either, since Θ(n log n) is a larger running time than Θ(n log^2 n). But there couldn't possibly be two correct answers.
So which is correct: is it B? or is it C? or neither? I'm so confused. :S
The false statement is Ω(n^2).
The expression is exactly Θ(n log n), since 3n(ln n) is the "highest" term, and it is Θ(n log n).
Ω(n^2) says it is no better than n^2 complexity, which is false here.
Addition: in your example the following holds for large n:
7(log n) < 5n < n(log log n) < 3n(ln n)
7 log n = Θ(log n), 5n = Θ(n), n(log log n) = Θ(n log log n), 3n ln n = Θ(n log n)
which, as said, makes n log n the highest, thus:
7(log n) + 5n + n(log log n) + 3n(ln n) = Θ(n log n)
and all terms are o(n^2) (little o here, on purpose), so Ω(n^2) is the false statement.
EDIT: clarifying and adding what I've written in comments:
option C is true because O(n log n) ⊂ O(n log^2 n). Mathematically:
n log^2(n) = n·log(n)·log(n) > n·log(n) for every n > 2.
Example: for n = 1,000,000: n log n ≈ 1,000,000·20 = 20,000,000,
while n log^2(n) ≈ 1,000,000·20·20 = 400,000,000.
Assuming that log is the base-2 logarithm and ln is the natural logarithm, the following ordering holds:
Θ(log n)  <  Θ(n)  <  Θ(n·log(log n))  <  Θ(n·ln n)
So the overall complexity is Θ(n·ln n).
Now let's check the statements:
n·ln(n) ∈ O(n^2) is true:
n·ln(n) ≤ c·n^2 for c = 1, ∀n ≥ 1
n·ln(n) ∈ Ω(n^2) is false:
ln(n) < c·n for every c > 0 once n is large enough, so no constant c makes n·ln(n) ≥ c·n^2 hold for all large n
n·ln(n) ∈ O(n·(log n)^2) is true:
n·(log n)^2 = n·(ln(n)/ln 2)^2 = n·(ln n)^2/(ln 2)^2, and n·ln(n) ≤ c·n·(ln n)^2/(ln 2)^2 for c = (ln 2)^2, ∀n ≥ e
n·ln(n) ∈ Ω(n) is true:
n·ln(n) ≥ c·n for c = 1, ∀n ≥ e
n·ln(n) ∈ Θ(n·log n) is true:
n·log(n) = n·ln(n)/ln 2, so n·ln(n) = c·n·log(n) for c = ln 2, ∀n > 0
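As a numeric sketch (taking log as the base-2 logarithm, as assumed above), the ratio of the full expression to n·ln n drifts down toward the constant 3, consistent with Θ(n log n):

```python
import math

# h(n) = 7*log2(n) + 5n + n*log2(log2(n)) + 3n*ln(n).
# The ratio h(n) / (n*ln n) drifts down toward 3, since the 3n*ln(n) term dominates.
def h(n):
    return 7 * math.log2(n) + 5 * n + n * math.log2(math.log2(n)) + 3 * n * math.log(n)

for n in [10**2, 10**4, 10**6, 10**8]:
    print(n, h(n) / (n * math.log(n)))
```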
