This is a basic question... but I'm thinking that O(M+N) is the same as O(max(M,N)), since the larger term should dominate as we go to infinity? That would also be different from O(min(M,N)), is that right? I keep seeing this notation, especially when discussing graph algorithms. For example, you routinely see O(|V| + |E|) (e.g., http://algs4.cs.princeton.edu/41undirected/).
Yes, O(M+N) means the same thing as O(max(M, N)). That is different from O(min(M, N)). As @Dr_Asik says, O(M+N) is technically linear O(N), but when M and N have a meaning, it is nice to be able to say "linear in what?" Imagine the algorithm is linear in the number of rows and the number of columns. We can either define N = rows + cols and say O(N), or we can say O(M+N) where M is rows and N is columns.
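For instance (a minimal hypothetical sketch; the function and its task are purely illustrative), an algorithm that scans each of the M row labels once and each of the N column labels once is linear in M + N, not in M * N:

    def find_empty_labels(row_labels, col_labels):
        """Scan each row label once, then each column label once: O(M + N)."""
        empty = []
        for r in row_labels:    # M iterations
            if not r:
                empty.append(r)
        for c in col_labels:    # N iterations
            if not c:
                empty.append(c)
        return empty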
Linear time is denoted O(N). Since M+N is a linear function, it should simply be denoted O(N) as well. Likewise, there is no sense in comparing O(1) to O(2), O(10), etc.; they are all constant time and should all be denoted O(1).
I know this is an old thread, but as I am studying this now I figured I would add my two cents for those currently searching similar questions.
I would argue that O(n+m), in the context of a graph represented as an adjacency list, is exactly that and cannot be simplified, for the following reasons:
1) O(n+m) = O(n) + O(m), but O(m) is upper-bounded by O(n^2) (a simple graph has at most n(n-1)/2 edges), so O(n+m) = O(n) + O(n^2) = O(n^2). However, this is purely in terms of n only; that is, it takes only the vertices into account and gives a weak upper bound (weak because it tries to represent the edges in terms of vertices). It does show, though, that O(n) does not equal O(n+m), as there COULD be a quadratic number of edges compared to vertices.
2) Saying O(n+m) takes into account all the elements that have to be passed through when implementing an algorithm such as Breadth-First Search (BFS). Since it touches every element of the graph exactly once, it can be considered linear, and it is a stricter analysis than upper-bounding the edges by n^2. One could, for the sake of notation, write n = |V| + |E| and say that BFS runs in O(n), giving the reader a sense of linearity, but generally, as the OP mentioned, it is written as O(n+m) where n = |V| and m = |E|.
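As a rough sketch of why that is linear (assuming an adjacency-list representation as a dict of lists; the names here are illustrative), note that BFS enqueues each vertex at most once and scans each adjacency list exactly once, so the total work is proportional to |V| + |E|:

    from collections import deque

    def bfs(adj, source):
        """Breadth-first search over an adjacency list: O(|V| + |E|).

        Each vertex enters the queue at most once, and each adjacency
        list is scanned exactly once, so every edge is examined O(1)
        times per endpoint.
        """
        visited = {source}
        order = []
        queue = deque([source])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
        return order

    # Example: bfs({0: [1, 2], 1: [0], 2: [0]}, 0) -> [0, 1, 2]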
Thanks a lot, hope this helps someone.
Different algorithms have different time complexities, and this one has puzzled me for a while.
O(m+n) represents a linear function, similar to O(m) or O(n), which also represent linear functions. How is O(m+n) any different from O(m) or O(n)? They all represent linear time. In the case of O(n) or O(m), we neglect the other terms and keep just the highest-degree one. Even with an equation like T(n) = n + 1 + n + 1, we simplify to T(n) = 2n and thus call it O(n). Either way, we do not take the other parts of the equation into account.
I did read some articles on this, but I didn't quite understand them, because according to those articles (or maybe I misinterpreted them), m and n stand for two loop variables i and j; but if that's the case, why do we write two-pointer algorithms as O(n^2)?
All this is very confusing to me; please explain the difference.
m and n might have very different values; that is why O(m+n) is different from O(m) or O(n) (though the same as O(max(m,n))).
Simple example:
Breadth-first search on graphs has complexity O(V+E) where V is vertex count, E is edge count.
For dense graphs, E can be as large as V*(V-1)/2, so E ~ V^2 and we cannot say the complexity is O(V); in this case it is O(V^2).
On the other side are very sparse graphs, where E is very small compared with V. In this case we cannot say the complexity is O(E); here it is O(V).
And O(E+V) is valid in all cases.
I am reading about the Rabin-Karp algorithm on Wikipedia, and the time complexity mentioned there is O(n+m). Now, from my understanding, m is necessarily between 0 and n, so in the best case the complexity is O(n) and in the worst case it is O(2n) = O(n), so why isn't it just O(n)?
Basically, Rabin-Karp expresses its running time as O(m+n) as a means to express the fact that it takes time linear in m+n, not just in n. Essentially, the variables m and n have to mean something whenever you use asymptotic notation. In the case of the Rabin-Karp algorithm, n represents the length of the text and m represents the length of the pattern. Note that O(2n) means the same thing as O(n), because 2n is still a function of n alone. However, in the case of Rabin-Karp, m+n isn't really a function of just n; rather, it's a function of both m and n, which are two independent variables. As such, O(m+n) doesn't reduce to O(n) the way O(2n) does.
I hope that makes sense. :-P
m and n measure different dimensions of the input data: a text of length n with patterns of length m is not the same as a text of length 2n with patterns of length 0.
O(m+n) tells us that the complexity is proportional to both the length of the text and the length of the patterns.
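As a rough illustration (a simplified sketch, not the exact Wikipedia pseudocode; the base and modulus here are arbitrary choices), the O(m) pattern hashing and the O(n) rolling scan are visible as two separate loops:

    def rabin_karp(text, pattern, base=256, mod=10**9 + 7):
        """Return the indices where pattern occurs in text: O(n + m) expected."""
        n, m = len(text), len(pattern)
        if m == 0 or m > n:
            return []
        # O(m): hash the pattern and the first window of the text.
        p_hash = t_hash = 0
        for i in range(m):
            p_hash = (p_hash * base + ord(pattern[i])) % mod
            t_hash = (t_hash * base + ord(text[i])) % mod
        high = pow(base, m - 1, mod)  # weight of the window's leading character
        matches = []
        # O(n): slide the window across the text one character at a time.
        for i in range(n - m + 1):
            if p_hash == t_hash and text[i:i + m] == pattern:
                matches.append(i)
            if i < n - m:
                t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
        return matches

    # Example: rabin_karp("abracadabra", "abra") -> [0, 7]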
There are some scenarios where stating the complexity as O(n+m) is more suitable than just saying O(max(m,n)).
Scenario:
Consider BFS (Breadth-First Search) or DFS (Depth-First Search) as the scenario.
It is more intuitive, and conveys more information, to say that the complexity is O(E+V) rather than O(max{E,V}); the former is in sync with the actual algorithmic description.
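For comparison, here is a minimal DFS sketch (assuming an adjacency-list dict as input); the accounting is identical to BFS: each vertex is visited once and each adjacency list is scanned once, which is exactly what O(E+V) expresses:

    def dfs(adj, source):
        """Depth-first search: O(|V| + |E|) by the same accounting as BFS."""
        visited = set()
        order = []

        def explore(u):
            visited.add(u)
            order.append(u)         # runs once per vertex: O(|V|) total
            for v in adj[u]:        # each adjacency list scanned once: O(|E|) total
                if v not in visited:
                    explore(v)

        explore(source)
        return order

    # Note: recursion depth can reach |V|; an explicit stack avoids that.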
It is easy to see that the time complexity of depth-first search is O(|V|)
But, recently, I read a book that said:
If this process is performed on a tree, then all tree vertices are systematically visited in a total of O(|E|) time, since |E| = Theta(|V|)
I cannot understand O(Theta(|V|)).
What is the difference between O(|V|) and O(Theta(|V|))?
The short answer:
O(|E|) means it runs in time linear in the number of edges.
|E| = Theta(|V|) means "|E| in Theta(|V|)": O(.), Theta(.), ... are sets, and computer scientists are lazy and sometimes write = instead of "in". Knowing that, the statement says the number of edges scales linearly with the number of nodes.
O(|V|) means it runs in time linear in the number of nodes.
O(Theta(|V|)) is a statement that makes no sense: O(.) expects a function, not a set, and Theta(.) is a set.
f in O(g) gives you an upper bound like "f is better than g (or equally good)".
For example n in O(n^2).
f in Omega(g) gives you a lower bound like "f is not better than g (or equally good)".
For example n^2 in Omega(n).
And f in Theta(g) means that both hold: "f is essentially the same as g".
For example 2n + 4 in Theta(n) because 2n + 4 in O(n) and 2n + 4 in Omega(n).
There are also little-o and little-omega, which replace the <= of Big-O and the >= of Big-Omega with strict comparisons < and >, so the "or equally good" gets dropped.
I put everything in quotation marks because it is all in respect to the meaning of Big-O-Notation, so "the same" in terms of asymptotic growth.
The exact definitions can be found at Wikipedia.
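In case it helps, the usual formal versions of those statements look like this:

    % Formal definitions behind the informal "better / not better / the same":
    f \in O(g)      \iff \exists\, c > 0,\ n_0 : \; 0 \le f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0
    f \in \Omega(g) \iff \exists\, c > 0,\ n_0 : \; f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0
    f \in \Theta(g) \iff f \in O(g) \ \text{and} \ f \in \Omega(g)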
We now come to your specific scenario. In trees, the number of edges is always bounded by the number of vertices, because a node can have at most one edge to its parent and one to each child; it cannot, for example, have multiple edges to the same child.
By the way, the exact number of edges in a tree is always |E| = |V| - 1, because there is exactly one edge per node (the one coming from its parent), excluding the root.
So we have |E| in Theta(|V|), because in terms of Big-O notation (asymptotic growth) they are "the same". So every tree algorithm that runs in O(|E|) can, for example, also be said to run in O(|V|).
Indeed, many algorithms run in Theta(.) and not only in O(.), but most of the time only O(.) is interesting, so the rest is left out. Omega(.) and Theta(.) are more commonly seen when analyzing problems in general. For example, one can prove that no comparison-based sorting algorithm can be faster than Omega(n * log(n)) (the proof is easy to find).
Do you mean Theta(|V|)? If so, it is easy.
First, you need to know the definition of Big-O: it denotes an asymptotic upper bound.
f(n) = O(g(n)) means there exist a constant c > 0 and an n0 such that 0 <= f(n) <= c*g(n) for all n >= n0.
For example, if the time complexity is 3n, then it belongs to O(n). Also, if the complexity is the constant 3, it still belongs to O(n).
Theta(n) means the time complexity is bounded both below and above by constant multiples of n; when f(n) is in both O(n) and Omega(n), we call it Theta(n).
for i = 0 to size(arr)
    for o = i + 1 to size(arr)
        do stuff here
What's the worst-case time complexity of this? It's not N^2, because the inner loop shrinks by one on every iteration of i. It's not N either; it should be bigger: (N-1) + (N-2) + (N-3) + ... + 1.
It is N ^ 2, since it's the product of two linear complexities.
(There's a reason asymptotic complexity is called asymptotic and not identical...)
See Wikipedia's explanation on the simplifications made.
Think of it like you are working with an n x n matrix: you are touching roughly half of the elements in the matrix, but O(n^2/2) is the same as O(n^2).
When you want to determine the complexity class of an algorithm, all you need to do is find the fastest-growing term in the algorithm's complexity function. For example, if you have the complexity function f(n) = n^2 - 10000*n + 400, to find O(f(n)) you just have to find the "strongest" term in the function. Why? Because for n big enough, only that term dictates the behavior of the entire function. Having said that, it is easy to see that both f1(n) = n^2 - n - 4 and f2(n) = n^2 are in O(n^2). However, for the same input size n, they don't run for the same amount of time.
In your algorithm, if n=size(arr), the do stuff here code will run f(n)=n+(n-1)+(n-2)+...+2+1 times. It is easy to see that f(n) represents a sum of an arithmetic series, which means f(n)=n*(n+1)/2, i.e. f(n)=0.5*n^2+0.5*n. If we assume that do stuff here is O(1), then your algorithm has O(n^2) complexity.
for i = 0 to size(arr)
I assumed that the loop ends when i becomes greater than size(arr), not equal to it. If the latter is the case, then f(n) = 0.5*n^2 - 0.5*n, and it is still in O(n^2). Remember that O(1), O(n), O(n^2), ... are complexity classes, and that the complexity function of an algorithm describes, for input size n, how many steps the algorithm takes.
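As a quick sanity check (a throwaway Python sketch of the nested loops, using the exclusive-bound interpretation), you can count the inner-loop iterations directly and compare against the closed form:

    def count_iterations(n):
        """Count how many times 'do stuff here' runs for the nested loops above."""
        count = 0
        for i in range(n):               # i = 0 .. n-1
            for o in range(i + 1, n):    # o = i+1 .. n-1
                count += 1
        return count

    # count_iterations(10) -> 45, i.e. 10*9/2 = n*(n-1)/2, which is in O(n^2).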
It's n*(n-1)/2, which is in O(n^2).
Most everyone knows that max_element(unsorted_array) can be solved in O(n) time. I realized that while that is easy, it seems much harder to solve it deliberately suboptimally, say in n*log(log(n)) time. Now, obviously an algorithm could simply be O(n + n*log(log(n))), where the more time-consuming part of the algorithm serves no real purpose. Likewise, you could just run the regular O(n) algorithm log(log(n)) times. Neither of these is very interesting.
So my question is: is there an algorithm that finds the max element of a set of numbers (stored in the container of your choice), with no redundant loops or operations, in Θ(n*log(log(n)))?
Van Emde Boas Trees?
There is a basic misconception here:
O(n + n*log(log(n))) is identical to O(n*log(log(n)))
Please read the wiki page carefully: http://en.wikipedia.org/wiki/Big_O_notation
The Big-O notation is asymptotic. This means that O(f(n) + g(n)) = O(max(f(n), g(n))) for all functions f, g. This is not a trick, they are really equal.
Symbols like O(n^2), O(n), etc. are not functions, they are sets; specifically, O(f(n)) means "the set of all functions that are asymptotically less than or equal to a constant times f(n)". If f(n) >= g(n), then O(f(n)) contains O(g(n)), so adding g(n) into that expression changes nothing.
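The one-line reason, for non-negative f and g, is that the sum and the max are within a factor of two of each other:

    \max(f(n), g(n)) \;\le\; f(n) + g(n) \;\le\; 2 \max(f(n), g(n))
    % hence
    O(f(n) + g(n)) = O(\max(f(n), g(n)))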
How about a proof that it cannot be done?
Claim (to be disproven): it is possible to determine the maximum element of an unsorted array without examining every element.
Assume you have examined all but one element of the unsorted array of n (n>1) items.
There are two possibilities for the largest element of the array:
1) The largest element you have seen so far (among the n-1 examined).
2) The one element you have not examined.
The unexamined element could be larger; the array is unsorted, so nothing rules that out (unless an examined element already equals the maximum value representable).
Result: contradiction. You must examine the nth element in order to determine the maximum (mathematically speaking; in computer science you can take a shortcut in one probably rare circumstance, namely when an examined element already equals the maximum representable value).
Since it doesn't matter what value n has here, this applies for all n except the degenerate case n = 1.
If this isn't a valid response, I may be unclear on the requirements... ?
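For what it's worth, here is a trivial sketch showing that the standard linear scan meets this lower bound exactly: it examines each of the n elements once and makes n - 1 comparisons:

    def max_element(arr):
        """Linear scan: every element examined once, exactly n - 1 comparisons."""
        best = arr[0]
        comparisons = 0
        for x in arr[1:]:
            comparisons += 1
            if x > best:
                best = x
        return best, comparisons

    # max_element([3, 1, 4, 1, 5]) -> (5, 4)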