In big-O notation, is it true that O((log n)^k) = O(log n), where k is some constant (e.g. the number of logarithmic for loops)?
My professor told me that this statement is true, but said it would be proved later in the course. I was wondering whether any of you could demonstrate its validity, or point me to a link where I could confirm it.
(1) It is true that O(log(n^k)) = O(log n).
(2) It is false that O(log^k(n)) (also written O((log n)^k)) = O(log n).
Observation: (1) has been proven by nmjohn.
Exercise: prove (2). (Hint: f(n) = log^2 n is O(log^2 n). Is it O(log n)? Is there any constant c such that, for all n greater than some n0, c log n > log^2 n?)
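For anyone who wants to check their answer, here is a minimal proof sketch of (2), assuming nothing beyond the definition of O( ):

    % Suppose log^2 n = O(log n), i.e. there exist constants c > 0 and n_0 with
    \[ \log^2 n \le c \log n \quad \text{for all } n \ge n_0 . \]
    % Dividing by log n (positive for n >= 2) gives
    \[ \log n \le c \quad \text{for all } n \ge \max(n_0, 2), \]
    % which is impossible, since log n grows without bound.
    % Hence no such c exists, and log^2 n is not O(log n).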
EDIT:
On a related note, anybody who finds this question helpful and/or interesting should go show some love for the new "Computer Science" StackExchange site. Here's a link. Go make this new place a reality!
http://area51.stackexchange.com/proposals/35636/computer-science-non-programming?referrer=rpnXA1_2BNYzXN85c5ibxQ2
Are you sure he didn't mean O(log(n^k))? That equals O(k * log n) = k * O(log n) = O(log n).
O(log n) is a class of functions. You cannot perform computations such as ^k on it. Thus, the term O(log n)^k does not even look sensible to me.
Related
I have been stuck on a problem for a while, and I would like to know: if g(n) = O(log n), does g(n^2) = O(log(n^2))? I would like to know whether I can use this to solve the problem below.
The problem in question (for context only):
If f(n) = n * g(n^2) and g(n) = O(log n), prove that f(n) = O(n log n).
The answer is yes: n is just a number, and so is n^2. If you set a = n^2, isn't g(a) still O(log a)?
The point you may be missing is that, in fact, O(log(n^2)) = O(2 log n) = O(log n). This follows from the properties of the logarithm.
To illustrate, for your specific problem, we have that: If
g(n) = O(log n)
then
g(n^2) = O(log(n^2)) = O(2 log n) = O(log n).
And since
f(n) = n * g(n^2)
then clearly
f(n) = O(n log n). □
(The amount of rigor your proof requires would depend on the context of your course or book.)
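If your course expects explicit constants, here is a minimal constant-chasing version of the same argument, where c and n0 are just the witnesses from the assumption g(n) = O(log n):

    % Assume g(n) <= c log n for all n >= n_0.
    \[ n \ge \sqrt{n_0} \;\Longrightarrow\; n^2 \ge n_0 \;\Longrightarrow\; g(n^2) \le c \log(n^2) = 2c \log n , \]
    \[ \text{hence } f(n) = n \cdot g(n^2) \le 2c \, n \log n \quad \text{for all } n \ge \max(\sqrt{n_0},\, 2), \]
    % so f(n) = O(n log n), with witnesses 2c and max(sqrt(n_0), 2).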
For example, the tightest bound for binary search is Θ(log n), but we can also say it is O(n^2) and Ω(1).
However, I'm confused about whether we can say something like "Binary search has a Θ(n) bound", since Θ(n) is between O(n^2) and Ω(1).
The worst-case execution of binary search on an array of size n uses Θ(log n) operations.
Any execution of binary search on an array of size n uses O(log n) operations.
Some "lucky" executions of binary search on an array of size n use O(1) operations.
The sentence "The complexity of binary search has a Θ(n) bound" is so ambiguous and misleading that most people would call it false. In general, I advise you not to use the word "bound" in the same sentence as one of the notations O( ), Θ( ), Ω( ).
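To make those three statements concrete, here is a small Python sketch (the function and the probe-counting convention are mine, not from any particular textbook). The worst-case probe count grows like log2(n), while a "lucky" search for the middle element always uses a single probe:

    import math

    def binary_search_probes(a, target):
        """Search sorted list a; return (index or None, number of probes)."""
        lo, hi = 0, len(a) - 1
        probes = 0
        while lo <= hi:
            mid = (lo + hi) // 2
            probes += 1  # one probe per loop iteration
            if a[mid] == target:
                return mid, probes
            elif a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None, probes

    for n in (2**10, 2**15, 2**20):
        a = list(range(n))
        _, worst = binary_search_probes(a, -1)               # absent key: worst case
        _, lucky = binary_search_probes(a, a[(n - 1) // 2])  # first midpoint: lucky case
        print(n, worst, lucky, round(math.log2(n)))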
It is true that log n < n.
It is false that log n = Θ(n).
The statement log n < Θ(n) is technically true, but so misleading that you should never write it.
It is true that log n = O(n).
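A short sketch of the second and fourth statements, assuming the limit characterization of these notations is available to you:

    \[ \lim_{n\to\infty} \frac{\log n}{n} = 0 \;\Longrightarrow\; \log n = O(n) \ \text{(indeed } \log n = o(n)\text{)}, \]
    % while log n = \Theta(n) would require log n >= c n for some c > 0 and all large n,
    % which contradicts the limit above; hence log n != \Theta(n).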
The "because" is wrong. Θ(n) is indeed compatible with O(n²) and Ω(1), but so is Θ(log n).
In the case of binary search, you can establish both the O(log n) and Ω(log n) bounds; together they are tight, and they are summarized by Θ(log n).
You may not choose complexities "randomly"; you have to prove them.
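For what it's worth, the "tight" remark is just the standard fact that a Θ bound is exactly a matching pair of O and Ω bounds. A sketch, where T(n) is my name for the worst-case operation count of binary search:

    \[ f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n)), \]
    \[ \text{so } T(n) = O(\log n) \ \text{and}\ T(n) = \Omega(\log n) \;\Longrightarrow\; T(n) = \Theta(\log n). \]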
True or false? ∀f[ f = Ω(n^2) ∧ f = O(n^3) ⇒ f = Θ(n^2) ∨ f = Θ(n^3)]
No. The statement means that the best case is Theta(n^2) and the worst case is Theta(n^3). If it were true, it would mean that the average case could never differ from the best or worst case, which isn't so.
See, for example, a BST, where the best case is Omega(1), the average case is O(log n), and the worst case is O(n). (Note that some people list O(log n) as the best case, but I disagree with that.)
In this case, the average case doesn't have to be either Theta(n^2) or Theta(n^3), because it could be, say, Theta(n^2 log n), which grows faster than n^2 and more slowly than n^3. I can't think offhand of an algorithm whose average case actually behaves like that, but you can at least imagine one (however contrived) that does.
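Whether or not such an algorithm comes to mind, a plain function already settles the true/false question. A minimal sketch with f(n) = n^2 log n:

    \[ n^2 \le n^2 \log n \le n^3 \quad (n \ge 3) \;\Longrightarrow\; f = \Omega(n^2) \ \text{and}\ f = O(n^3), \]
    \[ \text{but } \frac{f(n)}{n^2} = \log n \to \infty \ \text{and}\ \frac{f(n)}{n^3} = \frac{\log n}{n} \to 0, \ \text{so } f \ne \Theta(n^2) \ \text{and}\ f \ne \Theta(n^3). \]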
If I have an algorithm that runs in log(n^{5/4}!) time, how can I express this in terms of log n? I know that log(n!) is asymptotically equal to n log n, but does the 5/4 change anything, and if it does, how?
Good question! As you noted, log(n!) = O(n log n). From this it follows that
log(n^{5/4}!) = O(n^{5/4} log n^{5/4}) = O(n^{5/4} log n)
The last equality follows because log n^{5/4} = (5/4)*log n.
So you can simplify the expression to O(n^{5/4} log n).
The answer is yes, the factor 5/4 in the exponent matters: the function n^{5/4} grows asymptotically faster than n, so you can't ignore it. (This follows, for example, from the fact that n^{5/4}/n = n^{1/4}, which grows without bound.)
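If you want to sanity-check this numerically, here is a small Python sketch (my own, not from any reference) using math.lgamma, since log(m!) = lgamma(m + 1) for integer m (and the Gamma function extends this to non-integer m). By Stirling's approximation, the printed ratio should climb slowly toward 5/4, consistent with the O(n^{5/4} log n) bound:

    import math

    # For m = n**1.25, Stirling gives log(m!) ~ m * log(m) = (5/4) * n**1.25 * log(n),
    # so the ratio below should approach 5/4 as n grows (it converges slowly).
    for n in (10**3, 10**5, 10**7):
        m = n ** 1.25
        ratio = math.lgamma(m + 1) / (m * math.log(n))
        print(n, round(ratio, 3))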
In my homework, the question asks me to determine the asymptotic complexity of n^0.99999 * log(n). I figured it would be closer to O(n log n), but the answer key suggests that when c > 0, log n = O(n). I'm not quite sure why that is; could someone provide an explanation?
It's also true that lg n = O(n^k) (in fact, it is o(n^k); did the hint actually say that, perhaps?) for any constant k > 0, not just 1. Now consider k = 0.00001. Then n^0.99999 lg n = O(n^0.99999 * n^0.00001) = O(n). Note that this bound is not tight, since I could choose an even smaller k, so it's perfectly fine to say that n^0.99999 lg n is O(n^0.99999 lg n), just as we say n lg n is O(n lg n).
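If you want a short proof of the hint itself (for any constant k > 0, not just k = 1), one elementary route is the bound e^x ≥ x:

    \[ n^{k} = e^{k \ln n} \ge k \ln n \;\Longrightarrow\; \ln n \le \tfrac{1}{k}\, n^{k} \;\Longrightarrow\; \lg n = O(n^{k}); \]
    \[ \text{applying the same bound with } k/2: \quad \frac{\ln n}{n^{k}} \le \frac{2}{k}\, n^{-k/2} \to 0, \ \text{so } \lg n = o(n^{k}). \]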