Algorithm as an exercise

So most everyone should know that max_element(unsorted_array) can be solved in O(n) time. I realized that while it is easy to compute that, it seems much harder to solve it deliberately suboptimally, say in n*log(log(n)) time. Now obviously an algorithm could simply be O(n + n*log(log(n))), where the more time-consuming part of the algorithm has no real purpose. At the same time, you could just run the regular O(n) algorithm log(log(n)) times. Neither of these is very interesting.
So my question is: is there an algorithm that can find the max element in a set of numbers (stored in the container of your choice) in a way that has no redundant loops or operations, but is Θ(n*log(log(n)))?

Van Emde Boas Trees?

There is a basic misconception here:
O(n + n*log(log(n)) ) is exactly identical to O(n log(log(n)))
Please read the wiki page carefully: http://en.wikipedia.org/wiki/Big_O_notation
The Big-O notation is asymptotic. This means that O(f(n) + g(n)) = O(max(f(n), g(n))) for all functions f, g. This is not a trick, they are really equal.
Symbols like O(n^2), O(n), etc. are not functions, they are sets; specifically, O(f(n)) means "the set of all functions which are asymptotically less than or equal to a constant times f(n)". If f(n) >= g(n) asymptotically, then O(f(n)) contains O(g(n)), so adding g(n) into that expression changes nothing.
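To make that concrete for the pair above, here is a short derivation of my own (the base-2 logs and the threshold n >= 16 are my choices, not from the original answer):

    \text{For } n \ge 16:\quad \log_2 \log_2 n \ge 1 \;\Longrightarrow\; n + n \log_2 \log_2 n \le 2\, n \log_2 \log_2 n

So beyond that point the two expressions differ by at most a factor of 2, which is exactly what it takes for them to define the same O-class.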

How about a proof that it cannot be done?
Assumption (for contradiction): it is possible to determine the maximum element of an unsorted array without examining every element.
Suppose you have examined all but one element of an unsorted array of n (n > 1) items.
There are two possibilities for the largest element of the array:
The largest element you have seen so far (out of the n-1 examined).
The one element you have not seen.
The unexamined element could be bigger (unless an examined element is already the absolute maximum representable value); the array is unsorted, so nothing rules that out.
Result: contradiction. You must examine the nth element in order to determine the maximum (in a mathematical sense; in computer science you can take a shortcut under that one probably rare circumstance).
Since it doesn't matter what particular value n has, this applies for all n except the degenerate case (n = 1).
If this isn't a valid response, I may be unclear on the requirements... ?
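If it helps to see the adversary idea concretely, here is a small Python sketch of my own (find_max_skipping is a made-up stand-in for any procedure that skips one element, not code from the post):

    # Any procedure that never looks at position `skip` can be fooled by two
    # inputs that differ only at that position.
    def find_max_skipping(a, skip):
        best = None
        for i, x in enumerate(a):
            if i == skip:
                continue  # the one element we never examine
            if best is None or x > best:
                best = x
        return best

    a = [3, 1, 4, 1, 5]
    b = list(a)
    b[2] = 99                       # only the skipped position differs
    print(find_max_skipping(a, 2))  # 5 -- correct for a
    print(find_max_skipping(b, 2))  # 5 again, but the true maximum of b is 99

Whatever answer the procedure gives, an adversary can always place a larger value in the spot it never looked at, so at least n examinations are required.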

Related

Is Big Omega of any linear algorithm n or can it also be 1?

If we have a linear algorithm (for example, find if a number exists in a given array of numbers), does this mean that Omega(n) = n? The number of steps would be n. And the tightest bound I can make is c*n where c = 1.
But as far as I know, Omega also describes the best-case scenario, which in this case would be 1, because the searched element can be in the first position of the array, and that accounts for only one step. So, by this logic, Omega(n) = 1.
Which variant is the correct one and why? Thanks.
There is a large confusion about what is described using the asymptotic notation.
The running time of an algorithm is in general a function of the number of elements, but also of the particular values of the inputs. Hence T(x) where x is an input of n elements is not a function of n alone.
Now one can study the worst case and the best case: to determine these, you choose a configuration of the input corresponding to the slowest or fastest execution time, and these are functions of n only. An additional option is the expected (or average) running time, which corresponds to a given statistical distribution of the input. This is also a function of n alone.
Now, Tworst(n), Tbest(n), Texpected(n) can have upper bounds, denoted by O(f(n)), and lower bounds, denoted by Ω(f(n)). When these bounds coincide, the notation Θ(f(n)) is used.
In the case of a linear search, the best case is Θ(1) and the worst and expected cases are Θ(n). Hence the running time for arbitrary input is Ω(1) and O(n).
Addendum:
The holy grail of algorithmics is the discovery of efficient algorithms, i.e. those whose effective running time is of the same order as the best behavior that can be achieved independently of any particular algorithm.
For instance, it is obvious that the worst-case of any search algorithm is Ω(n) because whatever the search order, you may have to perform n comparisons (for instance if the key is not there). As the linear search is worst-case O(n), it is worst-case efficient. It is also best-case efficient, but this is not so interesting.
If you have a linear-time algorithm, that means that the time complexity has a linear upper bound, namely O(n). This does not mean that it also has a linear lower bound. In your example, finding out if an element exists, the lower bound is Ω(1); here Ω(n) is simply wrong.
Doing a linear search on an array to find the minimal element takes exactly n steps in all cases, so there the lower bound is Ω(n). Ω(1) would also be correct, since a constant number of steps is also a lower bound for n steps, but it is not a tight lower bound.
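As a concrete illustration (my own sketch, not from the answers), here is a linear search with a comparison counter; it makes the Θ(1) best case and Θ(n) worst case visible:

    def linear_search(a, key):
        comparisons = 0
        for i, x in enumerate(a):
            comparisons += 1
            if x == key:
                return i, comparisons   # found: index plus the work done
        return -1, comparisons          # not found: n comparisons

    a = [7, 3, 9, 3, 5]
    print(linear_search(a, 7))   # (0, 1)  best case: one comparison
    print(linear_search(a, 5))   # (4, 5)  worst case with a hit: n comparisons
    print(linear_search(a, 42))  # (-1, 5) key absent: n comparisons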

Is Manacher's algorithm really linear?

I recently read the Wikipedia article on Manacher's algorithm, and after seeing the sample implementation and dozens of other implementations... I'll be honest, I have no clue how this algorithm is linear. The way I see it, it's at best O(n+n/2), but that's not linear, is it?
http://en.wikipedia.org/wiki/Longest_palindromic_substring
For each character within the original string we try to expand the P array in both directions until we either reach a string boundary or the symmetry property is no longer satisfied. If that were all, it would mean O(n^2), but with the extra observations it comes out to less than that. Still, the lowest I could get it down to is O(n+n/2), not O(n), as that would essentially mean the inner nested loop is O(1). Anything higher than that breaks the linearity of the whole algorithm.
so in a nutshell, how is this algorithm linear?
O(n + n/2) is linear, O(n+n/2) ~ O(n)
The time is still proportional to n.
Or, being more precise, the limit of (n+n/2) / n as n goes to infinity (and even well before that) is a finite constant. So O(n+n/2) and O(n) are equivalent.
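Written out (my addition, nothing deeper than what the answer already says):

    \lim_{n \to \infty} \frac{n + n/2}{n} = \frac{3}{2}

A finite, nonzero constant, so n + n/2 = Θ(n): the extra n/2 only changes the constant factor, never the growth rate.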

What is the meaning of O(M+N)?

This is a basic question... but I'm thinking that O(M+N) is the same as O(max(M,N)), since the larger term should dominate as we go to infinity? Also, that would be different from O(min(M,N)), is that right? I keep seeing this notation, esp. when discussing graph algorithms. For example, you routinely see: O(|V| + |E|) (e.g., http://algs4.cs.princeton.edu/41undirected/).
Yes, O(M+N) means the same thing as O(max(M, N)). That is different from O(min(M, N)). As @Dr_Asik says, O(M+N) is technically linear O(N), but when M and N have a meaning it is nice to be able to say "linear in what?" Imagine the algorithm is linear in the number of rows and the number of columns. We can either define N = rows + cols and say O(N), or we can say O(M+N) where M is rows and N is columns.
Linear time is noted O(N). Since (M+N) is a linear function, it should simply be noted O(N) as well. Likewise there is no sense in comparing O(1) to O(2), O(10) etc., they're all constant time and should all be noted O(1).
I know this is an old thread, but as I am studying this now I figured I would add my two cents for those currently searching similar questions.
I would argue that O(n+m), in the context of a graph represented as an adjacency list, is exactly that and cannot be changed for the following reasons:
1) O(n+m) = O(n) + O(m), but O(m) is upper-bounded by O(n^2), so that O(n+m) = O(n) + O(n^2) = O(n^2). However, this is purely in terms of n, that is, it only takes the vertices into account and gives a weak upper bound (weak because it tries to represent the edges with vertices). It does show, though, that O(n) does not equal O(n+m), as there COULD be a quadratic number of edges compared to vertices.
2) Saying O(n+m) takes into account all the elements that have to be passed through when running an algorithm such as Breadth First Search (BFS). As it touches every element in the graph exactly once, it can be considered linear, and it is a tighter analysis than upper-bounding the edges with n^2. One could, for the sake of notation, write something like n = |V| + |E|, so that BFS runs in O(n) and gives the reader a sense of linearity, but generally, as the OP has mentioned, it is written as O(n+m) where n = |V| and m = |E|.
Thanks a lot, hope this helps someone.
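Since the answer above leans on BFS, here is a minimal BFS sketch of my own over an adjacency-list graph (the graph literal is just an example): every vertex is dequeued once and every adjacency entry is scanned once, which is where the O(|V| + |E|) bound comes from.

    from collections import deque

    def bfs(adj, start):
        visited = {start}
        order = []
        queue = deque([start])
        while queue:
            u = queue.popleft()      # each vertex leaves the queue once: O(|V|) total
            order.append(u)
            for v in adj[u]:         # each adjacency entry is scanned once: O(|E|) total
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
        return order

    adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    print(bfs(adj, 0))  # [0, 1, 2, 3]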

What's wrong with this inductive proof that mergesort is O(n)?

Comparison-based sorting is big omega of n*log(n), so we know that mergesort can't be O(n). Nevertheless, I can't find the problem with the following proof:
Proposition P(n): For a list of length n, mergesort takes O(n) time.
P(0): merge sort on the empty list just returns the empty list.
Strong induction: Assume P(1), ..., P(n-1) and try to prove P(n). We know that at each step in a recursive mergesort, two approximately "half-lists" are mergesorted and then "zipped up". The mergesorting of each half list takes, by induction, O(n/2) time. The zipping up takes O(n) time. So the algorithm has a recurrence relation of M(n) = 2M(n/2) + O(n) which is 2O(n/2) + O(n) which is O(n).
Compare the "proof" that linear search is O(1).
Linear search on an empty array is O(1).
Linear search on a nonempty array compares the first element (O(1)) and then searches the rest of the array (O(1)). O(1) + O(1) = O(1).
The problem here is that, for the induction to work, there must be one big-O constant that works both for the hypothesis and the conclusion. That's impossible here and impossible for your proof.
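Making the leak explicit for that linear-search "proof" (my own notation, with S(n) standing for the number of comparisons): the inductive step says S(n) = 1 + S(n-1). If the hypothesis is S(n-1) <= c for one fixed constant c, the step only yields S(n) <= c + 1, not S(n) <= c, so no single constant survives the induction; unrolling the recurrence gives S(n) = n, i.e. O(n), not O(1).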
The "proof" only covers a single pass, it doesn't cover the log n number of passes.
The recurrence only shows the cost of a pass as compared to the cost of the previous pass. To be correct, the recurrence relation should have the cumulative cost rather than the incremental cost.
You can see where the proof falls down by viewing the sample merge sort at http://en.wikipedia.org/wiki/Merge_sort
Here is the crux: all induction steps which refer to particular values of n must refer to a particular function T(n), not to O() notation!
O(M(n)) notation is a statement about the behavior of the whole function from problem size to performance guarantee (asymptotically, as n increases without limit). The goal of your induction is to determine a performance bound T(n), which can then be simplified (by dropping constant and lower-order factors) to O(M(n)).
In particular, one problem with your proof is that you can't get from your statement purely about O() back to a statement about T(n) for a given n. O() notation allows you to ignore a constant factor for an entire function; it doesn't allow you to ignore a constant factor over and over again while constructing the same function recursively...
You can still use O() notation to simplify your proof, by demonstrating:
T(n) = F(n) + O(something less significant than F(n))
and propagating this predicate in the usual inductive way. But you need to preserve the constant factor of F(): this constant factor has direct bearing on the solution of your divide-and-conquer recurrence!
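To see the same thing with explicit constants (my own sketch of the standard argument; T(n) is this answer's notation, c is mine): suppose you try to prove T(n) <= c*n by induction for the recurrence T(n) = 2*T(n/2) + n. The inductive step gives

    T(n) = 2\,T(n/2) + n \;\le\; 2\,c\,\frac{n}{2} + n \;=\; (c + 1)\,n

which is not <= c*n for any fixed c: the "constant" grows by 1 at every level of recursion, and accumulating it over the log2(n) levels is precisely where the true Θ(n log n) bound comes from.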

Trying to understand Big-oh notation

Hi, I would really appreciate some help with Big-O notation. I have an exam on it tomorrow, and while I can define what it means for f(x) to be O(g(x)), I can't say I thoroughly understand it.
The following question ALWAYS comes up on the exam and I really need to try and figure it out. The first part seems easy (I think): do you just pick a value for n, compute them all on a calculator and put them in order? This seems too easy though, so I'm not sure. I'm finding it very hard to find examples online.
From lowest to highest, what is the correct order of the complexities O(n^2), O(log2 n), O(1), O(2n), O(n!), O(n log2 n)?
What is the worst-case computational complexity of the Binary Search algorithm on an ordered list of length n = 2^k?
That guy should help you.
From lowest to highest, what is the correct order of the complexities O(n^2), O(log2 n), O(1), O(2n), O(n!), O(n log2 n)?
The order is the same one you get by comparing their limits at infinity: look at lim(a/b); if it is a finite nonzero constant, the two grow at the same rate; if it is infinity or 0, one of them grows strictly faster than the other.
What is the worst-case computational complexity of the Binary Search algorithm on an ordered list of length n = 2^k?
Find binary search best/worst Big-O.
Find linked list access by index best/worst Big-O.
Make conclusions.
Hey there. Big-O notation is tough to figure out if you don't really understand what the "n" means. You've already seen people talking about how O(n) == O(2n), so I'll try to explain exactly why that is.
When we describe an algorithm as having "order-n space complexity", we mean that the size of the storage space used by the algorithm gets larger with a linear relationship to the size of the problem that it's working on (referred to as n.) If we have an algorithm that, say, sorted an array, and in order to do that sort operation the largest thing we did in memory was to create an exact copy of that array, we'd say that had "order-n space complexity" because as the size of the array (call it n elements) got larger, the algorithm would take up more space in order to match the input of the array. Hence, the algorithm uses "O(n)" space in memory.
Why does O(2n) = O(n)? Because when we talk in terms of O(n), we're only concerned with the behavior of the algorithm as n gets as large as it could possibly be. If n were to become infinite, the O(2n) algorithm would take up two times infinity spaces of memory, and the O(n) algorithm would take up one times infinity spaces of memory. Since two times infinity is just infinity, both algorithms are considered to take up a similar-enough amount of room to be both called O(n) algorithms.
You're probably thinking to yourself "An algorithm that takes up twice as much space as another algorithm is still relatively inefficient. Why are they referred to using the same notation when one is much more efficient?" Because the gain in efficiency for arbitrarily large n when going from O(2n) to O(n) is absolutely dwarfed by the gain in efficiency for arbitrarily large n when going from O(n^2) to O(500n). When n is 10, n^2 is 10 times 10 or 100, and 500n is 500 times 10, or 5000. But we're interested in n as n becomes as large as possible. They cross over and become equal for an n of 500, but once more, we're not even interested in an n as small as 500. When n is 1000, n^2 is one MILLION while 500n is a "mere" half million. When n is one million, n^2 is one thousand billion - 1,000,000,000,000 - while 500n looks on in awe with the simplicity of its five-hundred-million - 500,000,000 - points of complexity. And once more, we can keep making n larger, because when using O(n) logic, we're only concerned with the largest possible n.
(You may argue that when n reaches infinity, n^2 is infinity times infinity, while 500n is five hundred times infinity, and didn't you just say that anything times infinity is infinity? That doesn't actually work for infinity times infinity. I think. It just doesn't. Can a mathematician back me up on this?)
This gives us the weirdly counterintuitive result where O(Seventy-five hundred billion spillion kajillion n) is considered an improvement on O(n * log n). Due to the fact that we're working with arbitrarily large "n", all that matters is how many times and where n appears in the O(). The rules of thumb mentioned in Julia Hayward's post will help you out, but here's some additional information to give you a hand.
One, because n gets as big as possible, O(n^2+61n+1682) = O(n^2), because the n^2 contributes so much more than the 61n as n gets arbitrarily large that the 61n is simply ignored, and the 61n term in turn dominates the 1682 term. If you see addition inside an O(), only concern yourself with the term of highest degree.
Two, O(log10 n) = O(log_b n) for any base b, because of the change-of-base formula log10(n) = log_b(n) / log_b(10). Hence O(log10 n) = O(log_b(n) * 1/log_b(10)), and that 1/log_b(10) factor is a constant, which we've already shown drops out of O() notation.
Very loosely, you could imagine picking extremely large values of n, and calculating them. Might exceed your calculator's range for large factorials, though.
If the definition isn't clear, a more intuitive description is that "higher order" means "grows faster as n grows". Some rules of thumb:
O(n^a) is a higher order than O(n^b) if a > b.
log(n) grows more slowly than any positive power of n
exp(n) grows more quickly than any power of n
n! grows more quickly than exp(kn)
Oh, and as far as complexity goes, ignore the constant multipliers.
That's enough to deduce that the correct order is O(1), O(log n), O(2n) = O(n), O(n log n), O(n^2), O(n!)
For big-O complexities, the rule is that if two things vary only by constant factors, then they are the same. If one grows faster than another ignoring constant factors, then it is bigger.
So O(2n) and O(n) are the same -- they only vary by a constant factor (2). One way to think about it is to just drop the constants, since they don't impact the complexity.
The other problem with picking n and using a calculator is that it will give you the wrong answer for certain n. Big O is a measure of how fast something grows as n increases, but at any given n the complexities might not be in the right order. For instance, at n=2, n^2 is 4 and n! is 2, but n! grows quite a bit faster than n^2.
It's important to get that right, because for running times with multiple terms you can drop the lesser terms -- i.e., if f(n) is 3n^2 + 2n + 5, you can drop the 5 (a constant), drop the 2n (3n^2 grows faster), then drop the 3 (a constant factor) to get O(n^2)... but if you don't know that n^2 is the bigger term, you won't get the right answer.
In practice, you can just know that n is linear, log(n) grows more slowly than linear, n^a > n^b if a > b, 2^n grows faster than any n^a, and n! is even faster than that. (Hint: try to avoid algorithms that have n in the exponent, and especially avoid ones that are n!.)
For the second part of your question, what happens with a binary search in the worst case? At each step, you cut the search space in half until eventually you find your item (or run out of places to look). That is log2(2^k) = k steps. A search where you just walk through the list to find your item would take n steps. And we know from the first part that O(log(n)) < O(n), which is why binary search is faster than a plain linear search.
Good luck with the exam!
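To tie the two parts together, here is a small binary-search sketch of my own with a step counter (the array of 2^10 elements is just an example), showing that the worst case is about log2(n) probes rather than n:

    def binary_search(a, key):
        lo, hi, steps = 0, len(a) - 1, 0
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2       # halve the remaining search space
            if a[mid] == key:
                return mid, steps
            if a[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, steps

    a = list(range(1024))             # n = 2^10, already sorted
    print(binary_search(a, 1023))     # (1023, 11): worst case is k + 1 = 11 probes
    print(binary_search(a, -1))       # (-1, 10): absent key, still ~log2(n) probes, never n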
In easy to understand terms the Big-O notation defines how quickly a particular function grows. Although it has its roots in pure mathematics its most popular application is the analysis of algorithms which can be analyzed on the basis of input size to determine the approximate number of operations that must be performed.
The benefit of using the notation is that you can categorize function growth rates by their complexity. Many different functions (an infinite number really) could all be expressed with the same complexity using this notation. For example, n+5, 2*n, and 4*n + 1/n all have O(n) complexity because the function g(n)=n most simply represents how these functions grow.
I put an emphasis on most simply because the focus of the notation is on the dominating term of the function. For example, O(2*n + 5) = O(2*n) = O(n), because n is the dominating term in the growth. This is because the notation assumes that n goes to infinity, which causes the remaining terms to play less of a role in the growth rate. And, by convention, any additive constants or constant multipliers are omitted.
Read Big O notation and Time complexity for a more in-depth overview.

Resources