Is the theta bound of an algorithm unique?

For example, the tightest bound for binary search is Θ(log n), but we can also say it is O(n^2) and Ω(1).
However, I'm confused about whether we can say something like "binary search has a Θ(n) bound", since Θ(n) lies between O(n^2) and Ω(1).

The worst-case execution of binary search on an array of size n uses Θ(log n) operations.
Any execution of binary search on an array of size n uses O(log n) operations.
Some "lucky" executions of binary search on an array of size n use O(1) operations.
The sentence "The complexity of binary search has a Θ(n) bound" is so ambiguous and misleading that most people would call it false. In general, I advise you not to use the word "bound" in the same sentence as one of the notations O( ), Θ( ), Ω( ).
It is true that log n < n.
It is false that log n = Θ(n).
The statement log n < Θ(n) is technically true, but so misleading that you should never write it.
It is true that log n = O(n).
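To make the worst-case/lucky-case distinction concrete, here is a minimal sketch (not from the question): an iterative binary search that counts its loop iterations, run once on a "lucky" target that sits exactly at the first midpoint and once on an absent target that forces the full descent. The array contents and targets are purely illustrative.

#include <stdio.h>

/* Iterative binary search over a sorted int array.
   *iterations receives the number of loop iterations performed. */
static int binary_search(const int *a, int n, int target, int *iterations)
{
    int lo = 0, hi = n - 1;
    *iterations = 0;
    while (lo <= hi) {
        (*iterations)++;
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;
        if (a[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; /* not found */
}

int main(void)
{
    enum { N = 1000000 };
    static int a[N];
    for (int i = 0; i < N; i++) a[i] = i; /* sorted input */

    int it;
    binary_search(a, N, a[(N - 1) / 2], &it); /* target at the first midpoint: lucky case */
    printf("lucky case: %d iteration(s)\n", it);
    binary_search(a, N, -1, &it);             /* absent element: full descent, worst case */
    printf("worst case: %d iterations (log2 of %d is about 20)\n", it, N);
    return 0;
}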

The "because" is wrong. Θ(n) is indeed compatible with O(n²) and Ω(1), but so is Θ(log n).
In the case of binary (dichotomic) search, you can establish both bounds, O(log n) and Ω(log n); together they are tight, which is summarized by Θ(log n).
You cannot choose complexities "randomly"; you have to prove them.

Related

Big-O Analysis - for-if-do something statement with 3 different time complexity

Here is the example
int i;
for (i = 0; i < n; i++)
{
    if (IsSignificantData(i))
        SpecialTreatment(i);
}
IsSignificantData(i) is O(n)
SpecialTreatment(i) is O(n log n)
1. Is the Big-O result n^2, because it is n*n, where the first n is the Big-O of the for-loop and the other n is the Big-O of IsSignificantData?
2. In a case like this, do we always take the worst case of the if-statement body and multiply it by the for-loop?
It depends.
With no knowledge about the behavior of IsSignificantData (except that it is O(n)), the most we can say is that the algorithm is O(n² log n), because in the worst case IsSignificantData returns true every time, and then the algorithm is the same as
for (i = 0; i < n; i++)
{
    IsSignificantData(i);
    SpecialTreatment(i);
}
O(n log n) is of greater order than O(n), so IsSignificantData is basically irrelevant. Then the loop makes it O(n² log n).
The same argument applies if IsSignificantData returns true randomly half the time, or alternating true and false, or one true out of every thousand times -- in any case where the number of trues is proportional to the number of times the function is called, the complexity is O(n² log n).
On the other hand, if IsSignificantData(i) returns true for just one value (or any fixed number of values) of i, the complexity of the algorithm is O(n²). That's because the complexity is the sum of calling IsSignificantData(i) n times (which is O(n²)), plus calling SpecialTreatment once (or some fixed number of times). And SpecialTreatment is O(n log n), which is of lower order than O(n²).
There are other possibilities, too. But all that can be said for sure, with the information given, is that the algorithm is definitely O(n² log n).
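If you want to see the two scenarios side by side, here is a rough sketch (my own stand-ins, not the asker's actual functions): IsSignificantData is simulated as n units of work, SpecialTreatment as n·log2(n) units, and we simply count units for "returns true every time" versus "returns true exactly once". The first column grows like n² log n, the second like n².

#include <math.h>
#include <stdio.h>

static long long ops; /* abstract operation counter */

/* Stand-in predicates: both cost O(n); they differ only in how often they return true. */
static int significant_always(int i, int n) { (void)i; ops += n; return 1; }
static int significant_once(int i, int n)   { ops += n; return i == 0; }

/* Stand-in for SpecialTreatment: costs about n*log2(n) units. */
static void special_treatment(int n) { ops += (long long)(n * log2((double)n)); }

static long long run(int n, int (*is_significant)(int, int))
{
    ops = 0;
    for (int i = 0; i < n; i++)
        if (is_significant(i, n))
            special_treatment(n);
    return ops;
}

int main(void)
{
    for (int n = 1000; n <= 16000; n *= 2)
        printf("n = %6d   always true: %12lld   true once: %12lld\n",
               n, run(n, significant_always), run(n, significant_once));
    return 0;
}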
In the worst case, the Big-O of this code is n * n log(n) -> n^2 log(n).
Even though you are calling functions, in the worst case your code is effectively of the following format:
for (1 to n) {
    for (1 to n) {
        // the O(n) work of IsSignificantData(i)
    }
    for (1 to n) {
        // some log(n) work, i.e. the O(n log n) SpecialTreatment(i)
    }
}
Each pass of the outer loop costs O(n) + O(n log n) = O(n log n), so the total is O(n^2 log n).
Having an if statement does not reduce the worst-case complexity.

Asymptotic Notation Comparison

Is O(log n) = O(2^O(log log n))?
I tried to take the log of both sides:
log log n = log(2^(log log n))
log log n = (log log n) · log 2
We can find a constant C > log 2 such that C · log log n > (log log n) · log 2.
So they are equal to each other. Am I right?
I think what you want to ask is if log n = O(2^(log log n))?
Think of O (big-O) as a <= operator, but the comparison is made asymptotically.
Now, to answer your question, we have to compare log n and 2^(log log n).
We use asymptotic notations only when we need to visualize how much an algorithm will scale as the input grows drastically.
log n is a logarithmic function of n.
2^(log log n) looks exponential, but it is not an exponential function of n: with base-2 logarithms, 2^(log log n) is exactly log n, because 2^(log2 x) = x for any x > 0 (here x = log n).
So the two functions are asymptotically equal, and in particular log n = O(2^(log log n)). If you want to convince yourself, compute both functions for very large values of n (like 10000 or 100000000) and notice that they coincide.
More generally, 2^(c · log log n) = (log n)^c, so the class 2^(O(log log n)) from your original question is the class of polylogarithmic functions, which certainly contains log n.
NOTE: We do not compare asymptotic notations the way you wrote it (O(log n) = O(2^O(log log n))). We compare functions (like log n) using these notations.
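A quick numeric sanity check of the identity used above (my own sketch, assuming base-2 logarithms throughout; log2 and pow are from <math.h>):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* With base-2 logs, 2^(log2(log2(n))) is exactly log2(n),
       so the two columns below coincide (up to floating-point noise). */
    double ns[] = { 1e4, 1e8, 1e12, 1e16 };
    for (int i = 0; i < 4; i++) {
        double n = ns[i];
        printf("n = %.0e   log2(n) = %9.4f   2^(log2 log2 n) = %9.4f\n",
               n, log2(n), pow(2.0, log2(log2(n))));
    }
    return 0;
}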

Asymptotic analysis with theta notation involving n factorial

If I have an algorithm that runs in log(n^(5/4)!) time, how can I represent this as something involving log(n)? I know that log(n!) is asymptotically equal to n log(n), but does the 5/4 change anything, and if it does, how so?
Good question! As you noted log(n!) = O(n log n). From this it follows that
log(n^{5/4}!) = O(n^{5/4} log n^{5/4}) = O(n^{5/4} log n)
The last equality follows because log n^{5/4} = (5/4)*log n.
So you can simplify the expression to O(n^{5/4} log n).
The answer is yes, the factor 5/4 in the exponent matters: the function n^{5/4} grows asymptotically faster than n, so you can't ignore it. (This follows, for example, from the fact that n^{5/4}/n = n^{1/4}, which grows without bound.)
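As a rough numeric check of log(n^{5/4}!) ≈ n^{5/4} log(n^{5/4}) (my own sketch, using natural logarithms; lgamma from <math.h> gives ln(m!)):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Compare ln(m!) with m*ln(m) for m = n^(5/4); the ratio tends to 1,
       which is log(n!) = Theta(n log n) applied at m = n^(5/4). */
    double ns[] = { 1e3, 1e6, 1e9 };
    for (int i = 0; i < 3; i++) {
        double m = pow(ns[i], 1.25);     /* n^(5/4) */
        double exact = lgamma(m + 1.0);  /* ln(m!) */
        double leading = m * log(m);     /* leading term of Stirling's formula */
        printf("n = %.0e   ln(m!) / (m ln m) = %.6f\n", ns[i], exact / leading);
    }
    return 0;
}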

Big Oh Notation O((log n)^k) = O(log n)?

In big-O notation, is O((log n)^k) = O(log n) true, where k is some constant (e.g. the number of logarithmic for loops)?
I was told by my professor that this statement was true, however he said it will be proved later in the course. I was wondering if any of you could demonstrate its validity or have a link where I could confirm if it is true.
(1) It is true that O(log(n^k)) = O(log n).
(2) It is false that O(log^k(n)) (also written O((log n)^k)) = O(log n).
Observation: (1) has been proven by nmjohn.
Exercise: prove (2). (Hint: f(n) = log^2 n is O(log^2 n). Is it O(log n)? What is a sufficiently large constant c such that, for all n greater than n0, c log n > log^2 n?)
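Here is a small empirical illustration of both statements (not a proof; my own sketch with k = 2, log2 from <math.h>): log(n^k)/log(n) stays at the constant k, while (log n)^k / log(n) grows without bound, so no constant c can ever dominate it.

#include <math.h>
#include <stdio.h>

int main(void)
{
    for (double n = 1e2; n <= 1e14; n *= 1e3) {
        double l = log2(n);
        /* (1) log(n^2) / log(n) is always exactly 2.            */
        /* (2) (log n)^2 / log(n) = log n, which keeps growing.  */
        printf("n = %.0e   log(n^2)/log(n) = %.2f   log^2(n)/log(n) = %.2f\n",
               n, log2(pow(n, 2.0)) / l, (l * l) / l);
    }
    return 0;
}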
Are you sure he didn't mean O(log n^k), because that equals O(k*log n) = k*O(log n) = O(log n).
O(log n) is a class of functions. You cannot perform computations such as ^k on it. Thus, the term O(log n)^k does not even look sensible to me.

Is worst case analysis not equal to asymptotic bounds

Can someone explain to me why this is true? I heard a professor mention it in his lecture.
The two notions are orthogonal.
You can have worst case asymptotics. If f(n) denotes the worst case time taken by a given algorithm with input n, you can have eg. f(n) = O(n^3) or other asymptotic upper bounds of the worst case time complexity.
Likewise, you can have g(n) = O(n^2 log n) where g(n) is the average time taken by the same algorithm with (say) uniformly distributed (random) inputs of size n.
Or you can have h(n) = O(n) where h(n) is the average time taken by the same algorithm with particularly distributed random inputs of size n (eg. almost sorted sequences for a sorting algorithm).
Asymptotic notation is a "measure". You have to specify what you want to count: worst case, best case, average, etc.
Sometimes, you are interested in stating asymptotic lower bounds of (say) the worst case complexity. Then you write f(n) = Omega(n^2) to state that in the worst case, the complexity is at least n^2. The big-Omega notation is opposite to big-O: f = Omega(g) if and only if g = O(f).
Take quicksort as an example. A call of quicksort on an input of size n has a run-time complexity T(n) of
T(n) = O(n) + 2·T((n-1)/2)
in the 'best case', when the unsorted input list is split into two equal sublists of size (n-1)/2 in each call. Solving this recurrence gives T(n) = O(n log n). If the partition is not perfect and the two sublists are not of equal size, i.e.
T(n) = O(n) + T(k) + T(n - 1 - k),
we still obtain O(n log n) as long as each sublist gets at least a constant fraction of the elements (say k ≥ n/10 and n - 1 - k ≥ n/10), just with a larger constant factor: the recursion depth is then still O(log n), and the total work on each level of the recursion is O(n).
However, in the 'worst case' no division of the input list takes place, i.e.:
T(n) = O(n) + T(0) + T(n - 1) = O(n) + T(n - 1) = O(n) + O(n - 1) + T(n - 2) = ... = O(n) + O(n - 1) + ... + O(1).
This happens e.g. if we take the first element of a sorted list as the pivot element.
Here, T(0) means one of the resulting sublists is empty and therefore takes no computing time (it has zero elements). All the remaining load, T(n-1), is needed for the second sublist. In this case, we obtain O(n²).
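To see the best-case/worst-case gap empirically, here is a minimal sketch (my own code, not from the answer): quicksort with the first element as pivot and a global comparison counter, run on random input (expected O(n log n) comparisons) and on already-sorted input (the Θ(n²) worst case for this pivot choice). n is kept modest because the sorted case also makes the recursion n levels deep.

#include <stdio.h>
#include <stdlib.h>

static long long comparisons;

/* Quicksort that always takes a[lo] as the pivot, counting element comparisons. */
static void quicksort(int *a, int lo, int hi)
{
    if (lo >= hi) return;
    int pivot = a[lo], i = lo + 1, j = hi;
    while (i <= j) {
        comparisons++;
        if (a[i] <= pivot) {
            i++;
        } else {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            j--;
        }
    }
    int t = a[lo]; a[lo] = a[j]; a[j] = t; /* move pivot into its final place */
    quicksort(a, lo, j - 1);
    quicksort(a, j + 1, hi);
}

int main(void)
{
    enum { N = 5000 };
    static int a[N];

    for (int i = 0; i < N; i++) a[i] = rand();   /* random input */
    comparisons = 0;
    quicksort(a, 0, N - 1);
    printf("random input : %lld comparisons (roughly n log n)\n", comparisons);

    for (int i = 0; i < N; i++) a[i] = i;        /* already-sorted input */
    comparisons = 0;
    quicksort(a, 0, N - 1);
    printf("sorted input : %lld comparisons (roughly n^2 / 2)\n", comparisons);
    return 0;
}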
If an algorithm had no worst-case scenario, it would not only be O(f(n)) but also o(f(n)) (big-O vs. little-o notation).
The asymptotic bound describes the behaviour as the input size n goes to infinity; mathematically, it is a statement about a limit as n goes to infinity. Worst-case behaviour, however, concerns concrete, finite inputs.
