I'm really confused about time complexity - algorithm

I understand that an algorithm's time T(n) can be bounded by O(g(n)) by the definition:
T(n) is O(g(n)) iff there is a c > 0, n0 > 0, such that for all n >= n0:
for every input of size n, A takes at most c * g(n) steps.
T(n) is the longest running time over all inputs of size n.
However, what I don't understand is the definition of Ω(g(n)). The definition is that for some input of size n, A takes at least c * g(n) steps.
But if that's the definition for Ω, then couldn't I find a lower bound for any algorithm that is the same as the upper bound? For instance, if sorting in the worst case takes O(n log n), then wouldn't I be able to show Ω(n log n) easily as well, seeing as there has to be at least one bad input for any size n that would take n log n steps? Let's assume that we're talking about heapsort.
I am really not sure what I'm missing here, because whenever I'm taught a new algorithm the time for a certain method is given as either Θ(g(n)) or O(g(n)), but no explanation is provided as to why it's either Θ or O.
I hope what I said was clear enough; if not, ask away about whatever is unclear. I really need this confusion cleared up. Thank you.

O is an upper bound, meaning that we know an algorithm that's O(n lg n) takes, asymptotically, at most a constant times n lg n steps in the worst case.
Ω is a lower bound, meaning that we know it's not possible for an Ω(n lg n) algorithm to take asymptotically less than a constant times n lg n steps in the worst case.
Θ is a tight bound: for example, if an algorithm is Θ(n lg n) then we know it's both O(n lg n) (so it is at least as fast as n lg n) and Ω(n lg n) (so we know it's no faster than n lg n).
The reason your argument is flawed is that you're actually assuming you know Ɵ(n lg n), not just O(n lg n).
For example, we know there's an Ω(n lg n) general lower bound on comparison sorts. Once we prove O(n lg n) for mergesort, that means mergesort is Θ(n lg n). Note that mergesort is also O(n^2), because it's no slower than n^2. (That's not how people would typically describe it, but that is what the formal notation means.)
For some problems, we don't know tight bounds; the general 3SUM problem in simple models of computation is known to be Ω(n lg n) because it can be used to perform sorting, but the best algorithms we have are Θ(n^2). The true complexity of the problem is somewhere between n lg n and n^2; we can say that it's O(n^2) and Ω(n lg n), but we don't know the Θ.
There's also o(f), which means strictly less than f, and ω(f), which means strictly greater than f.
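To make the mergesort point concrete, here's a small Python sketch (my own illustration, not from the original answer) that counts comparisons in a plain mergesort; for these inputs the counts land below 1 * n lg n, which of course also puts them far below 1 * n^2:

import math
import random

def merge_sort(a, counter):
    # textbook mergesort that counts element comparisons in counter[0]
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

for n in [2**10, 2**14, 2**18]:
    counter = [0]
    merge_sort([random.random() for _ in range(n)], counter)
    # comparisons vs. n lg n vs. n^2
    print(n, counter[0], int(n * math.log2(n)), n * n)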

The definition that I am familiar with is that T(n) is Ω(g(n)) if for some n0, for all n > n0, T(n) >= k * g(n) for some constant k > 0.
Then something is Θ(g(n)) iff it is both O(g(n)) and Ω(g(n)).
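Here's a minimal Python check of those definitions, using a made-up step-count function T(n) = 3 n lg n + 5n and hand-picked witnesses (the constants and sample points are mine, purely for illustration):

import math

def T(n):
    # hypothetical worst-case step count of some algorithm
    return 3 * n * math.log2(n) + 5 * n

# O(n log n) witness: c = 9, n0 = 2 (for n >= 2, 5n <= 6 n lg n)
# Omega(n log n) witness: k = 3, n0 = 2 (the 5n term only adds to the lower bound)
for n in [2, 16, 1024, 10**6]:
    assert T(n) <= 9 * n * math.log2(n)
    assert T(n) >= 3 * n * math.log2(n)
print("T(n) is O(n log n) and Omega(n log n), i.e. Theta(n log n), at the sampled points")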

Related

Why is O(n log n) greater than O(n)?

I read that O(n log n) is greater than O(n), and I need to know why that is so.
For instance taking n as 1, and solving O(n log n) will be O(1 log 1) = O(0). On the same hand O(n) will be O(1)?
Which actually contradicts O(n log n) > O(n)
Let us start by clarifying what Big O notation means in the current context. From (source) one can read:
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. (..) In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
The following statement is not accurate:
For instance taking n as 1, solving O(n log n) will be O(1 log 1) = O(0). On the same hand O(n) will be O(1)?
One cannot simply perform "O(1 log 1)" since the Big O notation does not represent a function but rather a set of functions with a certain asymptotic upper-bound; as one can read from source:
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
Informally, in computer-science time-complexity and space-complexity theories, one can think of the Big O notation as a categorization of algorithms with a certain worst-case scenario concerning time and space, respectively. For instance, O(n):
An algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input (source).
and O(n log n) as:
An algorithm is said to run in quasilinear time/space if T(n) = O(n log^k n) for some positive constant k; linearithmic time/space is the case k = 1 (source).
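As a concrete illustration of those two classes, here's a minimal Python sketch (the function names are placeholders of my own):

# linear time, O(n): a single pass over the input
def total(values):
    s = 0
    for v in values:
        s += v
    return s

# linearithmic time, O(n log n): comparison sorting
# (Python's built-in sorted uses Timsort, whose worst case is O(n log n))
def sorted_copy(values):
    return sorted(values)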
Mathematically speaking the statement
I read that O(n log n) is greater than O(n) (..)
is not accurate, since, as mentioned before, Big O notation represents a set of functions. Hence, it would be more accurate to say that O(n log n) contains O(n). Nonetheless, such relaxed phrasing is typically used to express (for the worst-case scenario) how one class of algorithms behaves compared with another class as the input size increases. To compare two classes of algorithms (e.g., O(n log n) and O(n)), instead of
For instance taking n as 1, solving O(n log n) will be O(1 log 1) = O(0). On the same hand O(n) will be O(1)?
Which actually contradicts O(n log n) > O(n)
you should analyze how both classes of algorithms behave as their input size (i.e., n) increases, for the worst-case scenario; that is, analyze what happens as n tends to infinity.
As #cem rightly points out, in the image "big-O denotes one of the asymptotically least upper-bounds of the plotted functions, and does not refer to the sets O(f(n))".
As you can see in the image, after a certain input size O(n log n) (green line) grows faster than O(n) (yellow line). That is why (for the worst case) O(n) is more desirable than O(n log n): as the input size grows, the running time grows more slowly with the former than with the latter.
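If you can't see the image, the same comparison can be tabulated with a few lines of Python (the sample sizes are picked arbitrarily):

import math

print(f"{'n':>12} {'n log2 n':>16} {'ratio':>8}")
for n in [10, 10**3, 10**6, 10**9]:
    nlogn = n * math.log2(n)
    print(f"{n:>12} {nlogn:>16.0f} {nlogn / n:>8.1f}")
# the ratio (roughly log2 n) keeps growing, so n log n eventually
# dwarfs n regardless of constant factors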
I'm going to give you the real answer, even though it seems to be more than one step away from the way you're currently thinking about it...
O(n) and O(n log n) are not numbers, or even functions, and it doesn't quite make sense to say that one is greater than the other. It's sloppy language, but there are actually two accurate statements that might be meant by saying that "O(n log n) is greater than O(n)".
Firstly, O(f(n)), for any function f(n) of n, is the infinite set of all functions that asymptotically grow no faster than f(n). A formal definition would be:
A function g(n) is in O(f(n)) if and only if there are constants n0 and C such that g(n) <= Cf(n) for all n > n0.
So O(n) is a set of functions and O(n log n) is a set of functions, and O(n log n) is a superset of O(n). Being a superset is kind of like being "greater", so if one were to say that "O(n log n) is greater than O(n)", they might be referring to the superset relationship between them.
Secondly, the definition of O(f(n)) makes f(n) an upper bound on the asymptotic growth of functions in the set. And the upper bound is greater for O(n log n) than it is for O(n). In more concrete terms, there is a constant n0 such that n log n > n for all n > n0. The bounding function itself is asymptotically greater, and this is another thing that one might mean when saying "O(n log n) is greater than O(n)".
Finally, both of these things are mathematically equivalent. If g(n) is asymptotically greater than f(n), it follows mathematically that O(g(n)) is a superset of O(f(n)), and if O(g(n)) is a proper superset of O(f(n)), it follows mathematically that g(n) is asymptotically greater than f(n).
Therefore, even though the statement "O(n log n) is greater than O(n)" does not strictly make any sense, it has a clear and unambiguous meaning if you're willing to read it charitably.
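A small Python spot-check of the superset claim (the function g, the constants, and the sampled range are all made up for illustration):

import math

def bounded_by(g, f, c, n0, samples):
    # spot-check the Big O witness (c, n0): g(n) <= c * f(n) for sampled n > n0
    return all(g(n) <= c * f(n) for n in samples if n > n0)

g = lambda n: 7 * n + 100      # some function in O(n)
samples = range(2, 10000)

# the witness that puts g in O(n) also puts it in O(n log n),
# because n <= n * log2(n) once n >= 2: O(n) is a subset of O(n log n)
print(bounded_by(g, lambda n: n, c=8, n0=100, samples=samples))
print(bounded_by(g, lambda n: n * math.log2(n), c=8, n0=100, samples=samples))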
The big O notation only has an asymptotic meaning, that is it makes sense only when n goes to infinity.
For example, a time complexity of O(100000) just means your code runs in constant time, which is asymptotically faster (smaller) than O(log n).

What is the complexity of code that does O(n*log n) work, and then O(n^2) work?

I have an algorithm that first does something in O(n*log(n)) time and then does something else in O(n^2) time. Am I correct that the total complexity would be
O(n*log(n) + n^2)
= O(n*(log(n) + n))
= O(n^2)
since log(n) + n is dominated by the + n?
The statement is correct, as O(n log n) is a subset of O(n^2); however, a formal proof would consist of choosing and constructing suitable constants.
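For what it's worth, one concrete choice of constants works out like this (my own pick, with a quick Python sanity check):

import math

# for n >= 2, log2(n) <= n, so n*log2(n) + n^2 <= n^2 + n^2 = 2*n^2,
# i.e. the witnesses c = 2, n0 = 2 put n log n + n^2 in O(n^2)
for n in [2, 10, 10**3, 10**6]:
    assert n * math.log2(n) + n**2 <= 2 * n**2
print("n log n + n^2 <= 2 n^2 at the sampled points")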
If the call probability of both is equal then you are right. But if the probabilities are not equal, you have to do an amortized analysis in which you spread the rare expensive calls (n^2) across the many fast calls (n log(n)).
For quicksort, for example (which generally takes n log(n) but rarely takes n^2), you can prove that the average running time is n log(n) because of amortized analysis.
One of the rules of complexity analysis is that you drop the terms with lower growth rates (lower exponents or lower-order factors).
n log n vs n^2 (divide both by n):
log n vs n
log n is smaller than n, so you can drop it from the complexity expression.
So if the complexity is O(n log n + n^2), then when n is really big the value of n log n is not significant compared to n^2; that is why you drop it and rewrite the complexity as O(n^2).

How can an algorithm have two worst case complexities?

A Chapter 1 exercise in Steven Skiena's The Algorithm Design Manual has this question:
Let P be a problem. The worst-case time complexity of P is O(n^2). The worst-case time complexity of P is also Ω(n log n). Let A be an algorithm that solves P. Which subset of the following statements are consistent with this information about the complexity of P?
A has worst-case time complexity O(n^2) .
A has worst-case time complexity O(n^3/2).
A has worst-case time complexity O(n).
A has worst-case time complexity Θ(n^2).
A has worst-case time complexity Θ(n^3).
How can an algorithm have two worst-case time complexities?
Is the author trying to say that for some value of n (say, 300) the upper bound for an algorithm solving P is of the order O(n^2), while for another value of n (say, 3000) the same algorithm's worst case is Ω(n log n)?
The answer to your specific question
is the author trying to say that for some value of n (say, 300) the upper bound for an algorithm solving P is of the order O(n^2), while for another value of n (say, 3000) the same algorithm's worst case is Ω(n log n)?
is no. That is not how complexity functions work. :) We don't talk about different complexity classes for different values of n. The complexity refers to the entire algorithm, not to the algorithm at specific sizes. An algorithm has a single time complexity function T(n), which computes how many steps are required to carry out the computation for an input size of n.
In the problem, you are given two pieces of information:
The worst case complexity is O(n^2)
The worst case complexity is Ω(n log n)
All this means is that we can pick constants c1, c2, N1, and N2, such that, for our algorithm's function T(n), we have
T(n) ≤ c1*n^2 for all n ≥ N1
T(n) ≥ c2*n log n for all n ≥ N2
In other words, our T(n) is "asymptotically bounded below" by some constant times n log n and "asymptotically bounded above" by some constant times n^2. It can itself be anything "between" an n log n style function and an n^2 style function. It can even be n log n (since that is bounded above by n^2), or it can be n^2 (since that's bounded below by n log n). It can be something in between, like n(log n)(log n).
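For instance, here's a quick Python check (sample points chosen by me) that n (log n)^2 really does sit between the two bounds:

import math

# n (log2 n)^2 is at least n log2 n once log2 n >= 1, and at most n^2
# once (log2 n)^2 <= n, which holds from n = 16 onwards
for n in [16, 10**3, 10**6]:
    middle = n * math.log2(n) ** 2
    assert n * math.log2(n) <= middle <= n * n
print("n (log n)^2 is sandwiched between n log n and n^2 at these points")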
It's not so much that an algorithm has "multiple worst case complexities" in the sense that it has different behaviors. What you are seeing is an upper bound and a lower bound! And these can, of course, be different.
Now it is possible that you have some "weird" function like this:
import math

def p(n):
    if n % 2 == 0:
        # even n: print roughly n log n stars
        print("*" * int(n * math.log2(n)))
    else:
        # odd n: print n^2 stars
        print("*" * (n * n))
This crazy algorithm does have the bounds specified in the problem from the Skiena book. And it has no Θ complexity. That might have been what you were thinking about, but do note that it is not necessary for a complexity function to be this weird in order for us to say the upper and lower bounds differ. The thing to remember is that upper and lower bounds are not tight unless explicitly stated to be so.
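If it helps, here's a tiny Python sketch of that function's step count (under the simplifying assumption of one step per star printed) showing why no single Θ bound can be tight for it:

import math

def steps(n):
    # assumed cost model: one step per star printed by p(n)
    return n * math.log2(n) if n % 2 == 0 else n * n

for n in range(1000, 1006):
    print(n, int(steps(n)))
# the count keeps jumping between roughly n log n and n^2, so the best we
# can state is the O(n^2) / Omega(n log n) pair from the exercise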

What is the complexity of something with 2 algorithms that can solve it?

There are 2 algorithms that can give the correct output for a given problem.
Given that the complexity of one algorithm is Θ(n log(n)) while the other is Θ(n), what is the complexity of the problem? E.g., is it Big O of n log(n), Big Omega of n log(n), or the other way around?
Since algorithm analysis is about worst-case scenarios, do we also consider the worst-case scenario in this situation?
Since
f(n) ∈ Θ(g(n))
means that f is bounded both above and below by g asymptotically, and
f(n) ∈ O(g(n))
means that f is bounded above by g asymptotically, clearly
f(n) ∈ Θ(g(n)) ⇒ f(n) ∈ O(g(n)).
Since the problem can be solved by a Θ(n) algorithm, the fact that it can also be solved by a Θ(n log(n)) algorithm is just noise intended to distract: the problem is no harder than O(n) (and may, in fact, be simpler). The problem's complexity is certainly not bounded below by n log(n), so it isn't Ω(n log(n)), and there's no evidence given that it is Ω(n) either (although it is trivially Ω(1)).
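As a made-up example of the same situation in Python: take the problem "find the largest element", which has both a Θ(n) solution and a slower, worst-case O(n log n) solution; the slower one tells us nothing extra about how hard the problem is:

# algorithm A: single scan, Theta(n)
def max_by_scan(values):
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

# algorithm B: sort first (worst case O(n log n) for Python's sorted),
# then take the last element
def max_by_sorting(values):
    return sorted(values)[-1]

# both solve the problem, but its complexity is O(n) because algorithm A exists
print(max_by_scan([3, 1, 4, 1, 5]), max_by_sorting([3, 1, 4, 1, 5]))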

What is the big-O of the function (log n)^k

What is the big-O complexity of the function (log n)^k for any k?
Any function whose runtime has the form (log n)^k is O((log n)^k). This expression isn't reducible to any other primitive function using simple transformations, and it's fairly common to see algorithms with runtimes like O(n (log n)^2). Functions with this growth rate are called polylogarithmic.
By the way, typically (log n)^k is written as log^k n, so the above algorithm would have runtime O(n log^2 n). In your case, the function log^2 n + log n would be O(log^2 n).
However, any function with a runtime of the form log(n^k) has runtime O(log n), assuming that k is a constant. This is because log(n^k) = k log n by the logarithm identities, and k log n is O(log n) because k is a constant. You should be careful not to blindly conclude that an algorithm that is O(log(n^k)) is O(log n), though; if k is a parameter to the function or depends on n, the correct big-O computation would be O(k log n) in that case.
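A quick numeric sanity check of that identity in Python (the values of n and k are chosen arbitrarily):

import math

n, k = 10**6, 5

# log(n^k) collapses to k * log(n) ...
print(math.log(n**k), k * math.log(n))      # same value, up to float rounding

# ... but (log n)^k does not collapse: it grows much faster than log n
print(math.log(n) ** k, math.log(n))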
Depending on the context in which you're working, you sometimes see the notation Õ(f(n)) to mean O(f(n) log^k n) for some constant k. This is sometimes called "soft-O" and is used in contexts in which the logarithmic terms are irrelevant. In that case, you could say that both functions are Õ(1), though this usage is not common in simple algorithmic analysis (in fact, outside of Wikipedia, I have seen this used precisely once).
Hope this helps!
It will still be (log(n))^2. A logarithm raised to a power is already in the lowest/simplest form.
(log n)^k is:
O((log n)^k)
O(n^k)
O(n)
O(n log n)
O(n^(1/2))
O(n^0.00000002)
etc. Which one is meaningful for you depends on the constants and the context.
log(n) is O((log(n))^2), so the entire expression is O((log(n))^2).
