Combining predicate logic with Big O Notation - algorithm

True or false? ∀f[ f = Ω(n^2) ∧ f = O(n^3) ⇒ f = Θ(n^2) ∨ f = Θ(n^3)]

No. Informally, the hypothesis says that f is bounded below by n^2 (think of a best case of Theta(n^2)) and above by n^3 (think of a worst case of Theta(n^3)). If the statement were true, it would mean that nothing in between is ever possible; the average case, for instance, could never differ from both the best and the worst case, which isn't so.
See, for example, searching a BST, where the best case is Omega(1), the average case is O(log n), and the worst case is O(n). (Some people will disagree with me and list O(log n) as the best case, but I disagree; either way, the three cases differ.)
In this case, the average case doesn't have to be either Theta(n^2) or Theta(n^3), because it could be, say, Theta(n^2 log n), which grows faster than n^2 and slower than n^3. I can't think of a well-known algorithm with exactly that running time offhand, but you can at least imagine one (however contrived) that has it.
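To make the counterexample concrete, here is a deliberately contrived sketch of my own (not from the question): its step count is Theta(n^2 log n), so it is Omega(n^2) and O(n^3), yet neither Theta(n^2) nor Theta(n^3).
// Contrived illustration: about n * n * log2(n) iterations in total,
// so the running time is Theta(n^2 log n).
public static long CountSteps(int n)
{
    long steps = 0;
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            // The inner loop halves k each time, so it runs about log2(n) times.
            for (int k = n; k > 1; k /= 2)
            {
                steps++;
            }
        }
    }
    return steps;
}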

Related

Big O and Big Omega are the same but in reverse?

Is this true?
f(n) = O(g(n)) === g(n) = Omega(f(n))
Basically are they interchangeable because they are opposites?
So if F is in Big O of G, then G is Big Omega of F?
It helps to look at the definitions:
f(n) is in O(g(n)) if and only if:
There are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k.
g(n) is in Omega(f(n)) if and only if:
There are positive constants c and k, such that g(n) ≥ cf(n) ≥ 0 for all n ≥ k.
If you can find values of c and k which make f(n) in O(g(n)), then the same values will also show g(n) to be Omega(f(n)) (just divide both sides of the inequality by c). That's why they are interchangeable.
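For a concrete instance (numbers of my own choosing): take f(n) = 2n and g(n) = n^2. With c = 1 and k = 2 we have 0 ≤ 2n ≤ n^2 for all n ≥ 2, so f(n) is in O(g(n)); dividing that same inequality by c gives n^2 ≥ 2n ≥ 0 for all n ≥ 2, so g(n) is in Omega(f(n)) with the very same constants.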
f(n) is not Theta(g(n)) unless it is both in O(g(n)) and in Omega(g(n)).
Hope that helps!
I've never particularly cared for the notation myself. The easiest way to think of this is that Big-O notation is the "worst case" and Big Omega is the "best case."
There are, of course, other things you might be interested in. For example, you could state that the following (rather dumb) linear search algorithm is O(n), but it would be more precise to state that it'll always be exactly proportional to n:
public bool Contains(List<int> list, int number)
{
    bool contains = false;
    foreach (int num in list)
    {
        // Note that we always loop through the entire list, even if we find it
        if (num == number)
            contains = true;
    }
    return contains;
}
We could, alternatively, state that this is both O(n) and Omega(n). For this case, we introduce the notation of Theta(n).
There are other cases too, such as the Average case. The average case will often be the same as either the best or worst case. For example, the best case of a well-implemented linear search is O(1) (if the item you're looking for is the first item on the list), but the worst case is O(n) (because you might have to search through the entire list to discover that the item's not there). If the list contains the item, on average, it'll take n/2 steps to find it (because we will, on average, have to look through half of the list to find the item). Conventionally, we drop the "/2" part and just say that the average case is O(n).
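For comparison, here is a sketch of the "well-implemented" linear search mentioned above (my own code, reusing the same List<int> shape as the earlier example): it returns as soon as it finds a match, so the best case is O(1), a successful search takes about n/2 comparisons on average, and the worst case is O(n).
public bool ContainsEarlyExit(List<int> list, int number)
{
    foreach (int num in list)
    {
        // Stop as soon as we find a match: best case is the first element (O(1)),
        // worst case is scanning the whole list (O(n)).
        if (num == number)
            return true;
    }
    return false;
}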
They don't strictly have to be the same though. I've seen some argument about whether the "best" case for a binary search tree search should be considered O(1) (because the item you're looking for might be the first item on the tree) or if it should be considered O(log n) (because the "optimal" case for a binary search is if the tree is perfectly balanced). (See here for a discussion of BST insertion). The average case is definitely O(log n) though. The worst case is O(n) (if the binary search tree is completely unbalanced). If we take the best case to be O(1), the average case to be O(log n), and the worst case to be O(n), then the average, worst, and best case would all obviously be different.
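For reference, a minimal sketch of the BST search being discussed (my own illustration; the node class and field names are assumptions, not from the original posts): if the target happens to sit at the root the search ends immediately, a balanced tree costs about log n comparisons, and a completely unbalanced tree degenerates into an O(n) scan.
public class TreeNode
{
    public int Value;
    public TreeNode Left;
    public TreeNode Right;
}

public static bool BstContains(TreeNode root, int number)
{
    TreeNode current = root;
    while (current != null)
    {
        if (number == current.Value)
            return true;  // Best case: the root already matches.
        // Otherwise descend one level; the tree's height drives the cost:
        // about log n levels if balanced, up to n levels if degenerate.
        current = number < current.Value ? current.Left : current.Right;
    }
    return false;
}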

Is the complexity of 3 log n and 2 log n the same?

Does it have the same complexity, since the two differ only by a constant multiplier, or should they be turned into n^3 and n^2 and compared that way?
For Big-O notation, the constant multiplier really doesn't matter; all the notation captures is the order of growth of the running time.
You can consider this small example:
Say you have 3 * 100 = 300 apples and 2 * 100 = 200 apples. Surely 300 != 200, but both are of the same order, that is, on the order of hundreds.
By the same reasoning, 3 log n != 2 log n, but both 3 log n and 2 log n are of the order of log n, that is, O(log n).
Yes. Multiplying by a constant doesn't matter. They are both just O(log n).
In fact, this is part of the definition of big-O notation: constant factors are absorbed into the constant in the definition, and if a function is bounded by a polynomial in n, then as n tends to infinity you may also disregard the lower-order terms of the polynomial.
Both are equivalent to O(log n). The constant does not alter the complexity.
Big O notation is defined in terms of sets of functions that share an upper bound.
With that said, it is also important to note that Big O can be defined mathematically as:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c · g(n) for all n ≥ n0 }
So, as you can see, the particular constant doesn't matter in Big O; all we care about is that some constant works. Both 3 log n and 2 log n can therefore be described as O(log n).
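Concretely (constants of my own choosing): 3 log n ≤ 3 · log n holds trivially for all n ≥ 1, so c = 3 witnesses 3 log n ∈ O(log n), and c = 2 does the same for 2 log n. In the other direction, log n ≤ 3 log n and log n ≤ 2 log n for n ≥ 1, so both functions are also Omega(log n) and hence Theta(log n).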

Prove or disprove statements about running times

I'm working through chapter 3 of CLRS, which is about running times and would like to work through some examples. Since I'm not enrolled in an algorithms class I need to resort to the www for help.
1) n^2 = Big-Omega(n^3)
I think this statement is false: if the best-case running time is n^3, then the algorithm cannot be n^2; even the best case is slower than that.
2) n + log n = Big-Theta (n)
I think this statement is true: we can ignore the lower-order term log n. This gives a worst-case running time of Big-Oh(n) and a best-case running time of Big-Omega(n). I'm not quite sure of this, though; some more clarification would be appreciated.
3) n^2 log n = Big-Oh (n^2)
I think this statement is false: the worst-case running time should be n^2 log n.
4) n log n = Big-Oh (n sqrt (n))
Could be true since n log n < n sqrt (n). Not quite sure though.
5) n^2 - 3n - 18 = Big-Theta (n^2)
Really no idea...
6) If f (n) = O (g (n)) and g (n) = O (h (n)), then f (n) = O (h (n)).
Holds by the transitive property.
I hope someone could elaborate a bit on my quite possibly wrong answers :)
1) You are correct, but your reason is not. Remember that Omega(n^3) does not directly describe an algorithm; it describes a function. The reason you are correct is this: for every pair of constants c, N there is some n > N such that n^2 < c * n^3, and thus n^2 is not in Omega(n^3).
2) You are correct: n ≤ n + log n ≤ 2n (for large enough n), and thus n + log n is both O(n) and Omega(n).
3) You are correct, but again, do not phrase it in terms of the "worst case". The explanation and proof outline are similar to 1.
4) This is correct, since log(n) is asymptotically smaller than sqrt(n) and the rest follows.
5) This one is true, and the idea is the same as in 2: for large enough n we have (1/2) * n^2 ≤ n^2 - 3n - 18 ≤ n^2, so the lower-order terms do not change the order and the function is Theta(n^2).
6) Correct: if f(n) ≤ c1 * g(n) for n ≥ k1 and g(n) ≤ c2 * h(n) for n ≥ k2, then f(n) ≤ c1 * c2 * h(n) for all n ≥ max(k1, k2).
As a side note: Omega(n) does not mean "a best-case running time of n"; it means that the function under analysis (which can be the worst-case complexity, the best-case complexity, the average-case complexity, ...) satisfies the conditions for being Omega(n).
For example, take quicksort:
Under worst-case analysis, it is Theta(n^2).
Under average-case analysis, it is Theta(n log n).
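To make that split concrete, here is a minimal quicksort sketch of my own (not from the answer), using the first element as the pivot: on typical random input the partitions are roughly balanced, giving the Theta(n log n) average case, while on already-sorted input every partition is maximally lopsided, giving the Theta(n^2) worst case.
public static void QuickSort(int[] a, int lo, int hi)
{
    if (lo >= hi)
        return;

    // First element as pivot: already-sorted input yields the most
    // unbalanced partitions and hence the Theta(n^2) worst case.
    int pivot = a[lo];
    int i = lo;
    for (int j = lo + 1; j <= hi; j++)
    {
        if (a[j] < pivot)
        {
            i++;
            (a[i], a[j]) = (a[j], a[i]);
        }
    }
    (a[lo], a[i]) = (a[i], a[lo]);  // Put the pivot between the two partitions.

    QuickSort(a, lo, i - 1);  // Elements smaller than the pivot
    QuickSort(a, i + 1, hi);  // Elements greater than or equal to the pivot
}
Called as QuickSort(array, 0, array.Length - 1).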

Big Oh Notation O((log n)^k) = O(log n)?

In big-O notation is O((log n)^k) = O(log n), where k is some constant (e.g. the number of logarithmic for loops), true?
I was told by my professor that this statement was true, however he said it will be proved later in the course. I was wondering if any of you could demonstrate its validity or have a link where I could confirm if it is true.
(1) It is true that O(log(n^k)) = O(log n).
(2) It is false that O(log^k(n)) (also written O((log n)^k)) = O(log n).
Observation: (1) has been proven by nmjohn.
Exercise: prove (2). (Hint: f(n) = log^2 n is O(log^2 n). Is it O(log n)? What is a sufficiently large constant c such that, for all n greater than n0, c log n > log^2 n?)
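Worked out (my own sketch of that exercise): suppose log^2 n ≤ c · log n for all n ≥ n0 and some constant c. Dividing both sides by log n (positive for n > 1) gives log n ≤ c for all n ≥ n0, which fails as soon as n > 2^c (taking logs base 2). No constant c works, so log^2 n is not O(log n), which proves (2).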
EDIT:
On a related note, anybody who finds this question helpful and/or interesting should go show some love for the new "Computer Science" StackExchange site. Here's a link. Go make this new place a reality!
http://area51.stackexchange.com/proposals/35636/computer-science-non-programming?referrer=rpnXA1_2BNYzXN85c5ibxQ2
Are you sure he didn't mean O(log n^k), because that equals O(k*log n) = k*O(log n) = O(log n).
O(log n) is a class of functions. You cannot perform computations such as ^k on it. Thus, the term O(log n)^k does not even look sensible to me.

Asymptotic Complexity of Logarithms and Powers

So, clearly, log(n) is O(n). But, what about (log(n))^2? What about sqrt(n) or log(n)—what bounds what?
There's a family of comparisons like this:
n^a (vs.) (log(n))^b
I run into these comparisons a lot, and I've never come up with a good way to solve them. Hints for tactics for solving the general case?
[EDIT: I'm not talking about the computational complexity of calculating the values of these functions. I'm talking about the functions themselves. E.g., f(n) = n is an upper bound on g(n) = log(n) because g(n) ≤ c · f(n) for c = 1 and all n ≥ 1.]
log(n)^a is always O(n^b), for any positive constants a, b.
Are you looking for a proof? All such problems can be reduced to seeing that log(n) is O(n), by the following trick:
log(n)^a = O(n^b) is equivalent to:
log(n) = O(n^{b/a}), since raising to the 1/a power is an increasing function.
This is equivalent to
log(m^{a/b}) = O(m), by setting m = n^{b/a}.
This is equivalent to log(m) = O(m), since log(m^{a/b}) = (a/b)*log(m).
You can prove that log(n) = O(n) by induction, focusing on the case where n is a power of 2.
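As a concrete instance of the trick (my own numbers): take a = 2 and b = 1, i.e. the (log(n))^2 question from above. (log(n))^2 = O(n) is equivalent to log(n) = O(n^{1/2}); setting m = n^{1/2} turns that into log(m^2) = 2 log(m) = O(m), which is just log(m) = O(m) again. The same steps with b = 1/2 show (log(n))^2 = O(sqrt(n)), answering the sqrt(n) comparison as well.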
log n -- O(log n)
sqrt n -- O(sqrt n)
n^2 -- O(n^2)
(log n)^2 -- O((log n)^2)
n^a versus (log(n))^b
You need either the bases or the exponents to be the same. So rewrite n^a and (log(n))^b so that they share a base or share an exponent, and then compare what remains. There is no general case.
I run into these comparisons a lot (...)
Hints for tactics for solving the general case?
Since you ask about the general case and run into such questions a lot, here is what I recommend:
Use the limit characterization of Big-O notation, once you know:
if the limit of f(n)/g(n) as n approaches +inf exists and is finite, then f(n) = O(g(n))
You can use a computer algebra system, for example the open-source Maxima; its documentation describes how to compute limits.
For more detailed info and an example, check out THIS answer.
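For instance (a worked example of my own using the rule above): to compare (log n)^2 with sqrt(n), look at the limit of (log n)^2 / sqrt(n) as n approaches +inf. Substituting n = e^m turns it into m^2 / e^{m/2}, which tends to 0 because the exponential outgrows any polynomial. The limit exists and is finite, so (log n)^2 = O(sqrt(n)); and because the limit is 0 rather than a positive constant, sqrt(n) is not O((log n)^2).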

Resources