How to test an algorithm for perfect optimization?

Is there any way to test an algorithm for perfect optimization?

There is no easy way to prove that any given algorithm is asymptotically optimal.
A proof of optimality, if one is ever found, sometimes comes years or even decades after the algorithm has been written. A classic example is the Union-Find/disjoint-set data structure.
Disjoint-set forests are a data structure where each set is represented by a tree data structure, in which each node holds a reference to its parent node. They were first described by Bernard A. Galler and Michael J. Fischer in 1964, although their precise analysis took years.
[...] These two techniques complement each other; applied together, the amortized time per operation is only O(α(n)), where α(n) is the inverse of the function f(n) = A(n,n), and A is the extremely quickly-growing Ackermann function.
[...] In fact, this is asymptotically optimal: Fredman and Saks showed in 1989 that Ω(α(n)) words must be accessed by any disjoint-set data structure per operation on average.
For some algorithms optimality can be proven after very careful analysis, but generally speaking, there's no easy way to tell if an algorithm is optimal once it's written. In fact, it's not always easy to prove if the algorithm is even correct.
See also
Wikipedia/Matrix multiplication
The naive algorithm is O(N^3), Strassen's is roughly O(N^2.807), Coppersmith-Winograd is O(N^2.376), and we still don't know what is optimal.
Wikipedia/Asymptotically optimal
it is an open problem whether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is an O(nα(n)) algorithm for finding minimum spanning trees. Whether this algorithm is asymptotically optimal is unknown, and would be likely to be hailed as a significant result if it were resolved either way.
Practical considerations
Note that sometimes asymptotically "worse" algorithms are better in practice due to many factors (e.g. ease of implementation, actually better performance for the given input parameter range, etc).
A typical example is quicksort with a simple pivot selection that may exhibit quadratic worst-case performance, but is still favored in many scenarios over a more complicated variant and/or other asymptotically optimal sorting algorithms.

For those among us mortals who merely want to know whether an algorithm:
reasonably works as expected;
is faster than the alternatives;
there is an easy step called 'benchmarking'.
Pick the best contenders in the area and compare them with your algorithm.
If your algorithm wins, then it better matches your needs (the ones defined by your benchmarks).
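As a minimal sketch of such a benchmark in Python (the function names and input data here are placeholders, not anything from the question), you might time both contenders on the same input:

    import random
    import timeit

    def my_algorithm(data):
        return sorted(data)        # placeholder: your algorithm goes here

    def best_contender(data):
        return sorted(data)        # placeholder: the established alternative

    data = [random.random() for _ in range(10_000)]

    for name, fn in [("mine", my_algorithm), ("contender", best_contender)]:
        # copy the input each run so neither contender benefits from already-processed data
        elapsed = timeit.timeit(lambda fn=fn: fn(list(data)), number=50)
        print(f"{name}: {elapsed:.3f}s for 50 runs")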

Related

How efficient is efficient when it comes to Polynomial Time Algorithms?

I hope this is the right place for this question.
Polynomial time algorithms! How do polynomial time algorithms (PTAs) actually relate to the processing power, memory size (RAM) and storage of computers?
We consider PTAs to be efficient. We know that even for a PTA, the time complexity increases with the input size n. Take primality testing, for example: there already exists a PTA that determines whether a number is prime. But what happens if I want to check a number as big as this one: https://justpaste.it/3fnj2? Is the PTA for primality checking still considered efficient? Is there a computer that can compute whether such a big number is prime?
Whether yes or no (maybe no, idk), how does the concept of polynomial time algorithms actually apply in the real world? Is there some computing bound or something for so-called polynomial time algorithms?
I've tried Google searches on this but all I find are mathematical Big O related explanations. I don't find articles that actually relate the concept of PTAs to computing power. I would appreciate some explanation or links to some resources.
There are a few things to explain.
Regarding polynomial time as efficient is just an arbitrary agreement. Mathematicians have simply defined the set of efficient algorithms to be those whose running time is bounded by some polynomial. That is only a mathematical definition. Mathematicians don't see your actual hardware and they don't care about it; they work with abstract concepts. Yes, theorists consider O(n^100) to be efficient.
But you cannot compare statements from theoretical computer science one-to-one with computer programs running on hardware. Scientists work with formulas and theorems, while computer programs are executed on electric circuits.
The Big-O notation does not help you compare implementations of an algorithm. Big-O notation compares algorithms, not their implementations. This can be illustrated as follows. Suppose you have a prime-checking algorithm with a high polynomial complexity. You implement it and you see that it does not perform well for practical use cases. So you use a profiler, and it tells you where the bottleneck is. You find out that 98% of the computation time is spent on matrix multiplications. So you develop a processor that does exactly such calculations extremely fast. Or you buy the most modern graphics card for this purpose. Or you wait 150 years for a new hardware generation. Or you manage to run most of these multiplications in parallel. Imagine you somehow reduced the time for matrix multiplications by 95%. With this wonderful hardware you run your algorithm, and suddenly it performs well. So your algorithm is actually efficient; it was only your hardware that was not powerful enough. This is not just a thought experiment: such dramatic improvements in computing power happen quite often.
Most algorithms that have polynomial complexity have it because the problems they solve are themselves of polynomial complexity. Consider, for example, matrix multiplication: if you do it on paper it is O(n^3); it is in the nature of this problem that it has polynomial complexity. In practice and daily life (I think), most problems for which you have a polynomial algorithm are actually polynomial problems. If you have a polynomial problem, then a polynomial algorithm is efficient.
Why do we talk about polynomial algorithms, and why do we consider them efficient? As already said, this is quite arbitrary, but as motivation the following may be helpful. When talking about "polynomial algorithms", we can say there are two types of them.
The algorithms whose complexity is even lower than polynomial (e.g., linear or logarithmic). I think we can agree to say these are efficient.
The algorithms that are actually polynomial and not lower than polynomial. As illustrated above, in practice these algorithms are oftentimes polynomial because they solve problems that are actually of polynomial nature and therefore require polynomial complexity. If you see it this way, then of course we can say that these algorithms are efficient.
In practice, if you have a linear problem you will normally recognise it as a linear problem, and you would normally not apply an algorithm with worse complexity to it. This is just practical experience. If, for example, you search for an element in a list, you would not expect more comparisons than the number of elements in the list. If in such a case you apply an algorithm with complexity O(n^2), then of course that polynomial algorithm is not efficient. But as said, such mistakes are usually so obvious that they don't happen.
So that is my final answer to your question: in practice, software developers have a good feeling for linear complexity, and good developers also have a feeling for logarithmic complexity in real life. As a consequence, you don't have to worry about complexity theory too much. If you have a polynomial algorithm, you normally have a good enough feeling to tell whether the problem itself is actually linear or not; if it isn't, then your algorithm is efficient. If you have an exponential algorithm, it may not be obvious what is going on, but in practice you will see the computation time, do some experiments, or get complaints from users. Exponential complexity is normally hard to overlook.

Difference between a stochastic and a heuristic algorithm

Extending the question of streetparade, I would like to ask what is the difference, if any, between a stochastic and a heuristic algorithm.
Would it be right to say that a stochastic algorithm is actually one type of heuristic?
TTBOMK, "stochastic algorithm" is not a standard term. "Randomized algorithm" is, however, and it's probably what is meant here.
Randomized: Uses randomness somehow. There are two flavours: Monte Carlo algorithms always finish in bounded time, but don't guarantee an optimal solution, while Las Vegas algorithms aren't necessarily guaranteed to finish in any finite time, but promise to find the optimal solution. (Usually they are also required to have a finite expected running time.) Examples of common Monte Carlo algorithms: MCMC, simulated annealing, and Miller-Rabin primality testing. Quicksort with randomized pivot choice is a Las Vegas algorithm that always finishes in finite time. An algorithm that does not use any randomness is deterministic.
Heuristic: Not guaranteed to find the correct answer. An algorithm that is not heuristic is exact.
Many heuristics are sensitive to "incidental" properties of the input that don't affect the true solution, such as the order items are considered in the First-Fit heuristic for the Bin Packing problem. In this case they can be thought of as Monte Carlo randomized algorithms: you can randomly permute the inputs and rerun them, always keeping the best answer you find. OTOH, other heuristics don't have this property -- e.g. the First-Fit-Decreasing heuristic is deterministic, since it always first sorts the items in decreasing size order.
If the set of possible outputs of a particular randomized algorithm is finite and contains the true answer, then running it long enough is "practically guaranteed" to eventually find it (in the sense that the probability of not finding it can be made arbitrarily small, but never 0). Note that it's not automatically the case that some permutation of the inputs to a heuristic will result in getting the exact answer -- in the case of First-Fit, it turns out that this is true, but this was only proven in 2009.
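To make the First-Fit example concrete, here is a small sketch (my own illustration, not code from any particular source): First-Fit packs each item into the first open bin with room; shuffling the input and keeping the best result gives the Monte Carlo view described above, while sorting first gives the deterministic First-Fit-Decreasing variant.

    import random

    def first_fit(items, capacity):
        # pack items in the given order into the first bin with enough room
        bins = []                             # remaining free space of each open bin
        for item in items:
            for i, free in enumerate(bins):
                if item <= free:
                    bins[i] -= item
                    break
            else:
                bins.append(capacity - item)  # no bin fits: open a new one
        return len(bins)                      # number of bins used

    def first_fit_shuffled(items, capacity, tries=100):
        # Monte Carlo flavour: random permutations of the input, keep the best packing
        best = first_fit(items, capacity)
        for _ in range(tries):
            best = min(best, first_fit(random.sample(items, len(items)), capacity))
        return best

    def first_fit_decreasing(items, capacity):
        # deterministic heuristic: consider the items from largest to smallest
        return first_fit(sorted(items, reverse=True), capacity)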
Sometimes stronger statements about convergence of randomized algorithms can be made: these are usually along the lines of "For any given small threshold d, after t steps we will be within d of the optimal solution with probability f(t, d)", with f(t, d) an increasing function of t and d.
Both approaches are usually used to speed up generate-and-test solutions to NP-complete problems.
Stochastic algorithms use randomness
They do not try the combinations in order; instead they try random ones from the whole range of possibilities, hoping to hit the solution sooner. Implementation is fast and easy, and a single iteration is also fast (constant time).
Heuristic algorithms
They do not pick combinations randomly but based on some knowledge of the process being used, the input dataset, or the intended usage. So they lower the number of combinations significantly, to only those that are likely to be the solution, and they typically try all of those until a solution is found.
Implementation complexity depends on the problem, and a single iteration is usually much slower than in the stochastic approach (which is constant time), so a heuristic is worthwhile only if the number of possibilities is lowered enough for an actual speed-up to be visible: even though the algorithmic complexity with a heuristic is usually much lower, the per-iteration constant is sometimes big enough to slow things down in runtime terms.
Both approaches can also be combined.

I need to solve an NP-hard problem. Is there hope?

There are a lot of real-world problems that turn out to be NP-hard. If we assume that P ≠ NP, there aren't any polynomial-time algorithms for these problems.
If you have to solve one of these problems, is there any hope that you'll be able to do so efficiently? Or are you just out of luck?
If a problem is NP-hard, under the assumption that P ≠ NP there is no algorithm that is
deterministic,
exactly correct on all inputs all the time, and
efficient on all possible inputs.
If you absolutely need all of the above guarantees, then you're pretty much out of luck. However, if you're willing to settle for a solution to the problem that relaxes some of these constraints, then there very well still might be hope! Here are a few options to consider.
Option One: Approximation Algorithms
If a problem is NP-hard and P ≠ NP, it means that there is no algorithm that will always efficiently produce the exactly correct answer on all inputs. But what if you don't need the exact answer? What if you just need answers that are close to correct? In some cases, you may be able to combat NP-hardness by using an approximation algorithm.
For example, a canonical example of an NP-hard problem is the traveling salesman problem. In this problem, you're given as input a complete graph representing a transportation network. Each edge in the graph has an associated weight. The goal is to find a cycle that goes through every node in the graph exactly once and which has minimum total weight. In the case where the edge weights satisfy the triangle inequality (that is, the best route from point A to point B is always to follow the direct link from A to B), then you can get back a cycle whose cost is at most 3/2 optimal by using the Christofides algorithm.
As another example, the 0/1 knapsack problem is known to be NP-hard. In this problem, you're given a bag and a collection of objects with different weights and values. The goal is to pack the maximum value of objects into the bag without exceeding the bag's weight limit. Even though computing an exact answer requires exponential time in the worst case, it's possible to approximate the correct answer to an arbitrary degree of precision in polynomial time. (The algorithm that does this is called a fully polynomial-time approximation scheme or FPTAS).
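To give a flavour of how such a scheme works, here is a rough sketch of the standard value-scaling idea (my own simplification with made-up function names; it assumes positive values and weights, 0 < eps < 1, and that each item fits in the bag on its own): round the values down to a coarse grid, run an exact dynamic program over the scaled values, and let the grid size trade precision for running time.

    def knapsack_fptas(values, weights, capacity, eps):
        # returns a value achievable within `capacity` that is at least
        # (1 - eps) times the optimum, under the assumptions stated above
        n = len(values)
        grid = eps * max(values) / n               # rounding granularity
        scaled = [int(v / grid) for v in values]   # round each value down
        total = sum(scaled)
        INF = float("inf")
        # min_weight[v] = least weight needed to reach scaled value exactly v
        min_weight = [0.0] + [INF] * total
        for w, sv in zip(weights, scaled):
            for v in range(total, sv - 1, -1):     # 0/1 knapsack style: iterate downwards
                if min_weight[v - sv] + w < min_weight[v]:
                    min_weight[v] = min_weight[v - sv] + w
        best = max(v for v in range(total + 1) if min_weight[v] <= capacity)
        return best * grid                         # lower bound on the value actually achieved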
Unfortunately, we do have some theoretical limits on the approximability of certain NP-hard problems. The Christofides algorithm mentioned earlier gives a 3/2 approximation to TSP where the edges obey the triangle inequality, but interestingly enough it's possible to show that if P ≠ NP, there is no polynomial-time approximation algorithm for general TSP (without the triangle inequality) that can get within any constant factor of optimal. Usually, you need to do some research to learn more about which problems can be well-approximated and which ones can't, since many NP-hard problems can be approximated well and many can't. There doesn't seem to be a unified theme.
Option Two: Heuristics
In many NP-hard problems, standard approaches like greedy algorithms won't always produce the right answer, but often do reasonably well on "reasonable" inputs. In many cases, it's reasonable to attack NP-hard problems with heuristics. The exact definition of a heuristic varies from context to context, but typically a heuristic is either an approach to a problem that "often" gives back good answers at the cost of sometimes giving back wrong answers, or is a useful rule of thumb that helps speed up searches even if it might not always guide the search the right way.
As an example of the first type of heuristic, let's look at the graph-coloring problem. This NP-hard problem asks, given a graph, to find the minimum number of colors necessary to paint the nodes in the graph such that no edge's endpoints are the same color. This turns out to be a particularly tough problem to solve with many other approaches (the best known approximation algorithms have terrible bounds, and it's not suspected to have a parameterized efficient algorithm). However, there are many heuristics for graph coloring that do quite well in practice. Many greedy coloring heuristics exist for assigning colors to nodes in a reasonable order, and these heuristics often do quite well in practice. Unfortunately, sometimes these heuristics give terrible answers back, but provided that the graph isn't pathologically constructed the heuristics often work just fine.
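For concreteness, a minimal greedy coloring sketch (my own illustration) is below; practical heuristics differ mainly in how cleverly they choose the order in which vertices are considered (e.g. by degree).

    def greedy_coloring(adj):
        # adj maps each vertex to an iterable of its neighbours
        color = {}
        for v in adj:                        # the visiting order is the whole heuristic
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:                 # smallest color not used by a colored neighbour
                c += 1
            color[v] = c
        return color                         # colors used = max(color.values()) + 1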
As an example of the second type of heuristic, it's helpful to look at SAT solvers. SAT, the Boolean satisfiability problem, was the first problem proven to be NP-hard. The problem asks, given a propositional formula (often written in conjunctive normal form), to determine whether there is a way to assign values to the variables such that the overall formula evaluates to true. Modern SAT solvers are getting quite good at solving SAT in many cases by using heuristics to guide their search over possible variable assignments. One famous SAT-solving algorithm, DPLL, essentially tries all possible assignments to see if the formula is satisfiable, using heuristics to speed up the search. For example, if it finds that a variable is either always true or always false, DPLL will try assigning that variable its forced value before trying other variables. DPLL also finds unit clauses (clauses with just one literal) and sets those variables' values before trying other variables. The net effect of these heuristics is that DPLL ends up being very fast in practice, even though it's known to have exponential worst-case behavior.
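A stripped-down sketch of the DPLL skeleton (unit propagation plus branching) might look like the toy version below; this is purely illustrative and has none of the engineering of real solvers. Clauses are frozensets of nonzero integers, with -x standing for "not x".

    def simplify(clauses, lit):
        # assume literal `lit` is true: drop satisfied clauses, shrink the rest
        out = []
        for clause in clauses:
            if lit in clause:
                continue                     # clause already satisfied
            reduced = clause - {-lit}
            if not reduced:
                return None                  # empty clause: contradiction
            out.append(reduced)
        return out

    def dpll(clauses):
        # unit propagation: a one-literal clause forces that literal's value
        while True:
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                break
            clauses = simplify(clauses, units[0])
            if clauses is None:
                return False
        if not clauses:
            return True                      # every clause satisfied
        # branch on some literal of the first remaining clause
        lit = next(iter(clauses[0]))
        for choice in (lit, -lit):
            reduced = simplify(clauses, choice)
            if reduced is not None and dpll(reduced):
                return True
        return False

    # e.g. (x1 or x2) and (not x1 or x2) and (not x2) is unsatisfiable:
    # dpll([frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})]) == False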
Option Three: Pseudopolynomial-Time Algorithms
If P ≠ NP, then no NP-hard problem can be solved in polynomial time. However, in some cases, the definition of "polynomial time" doesn't necessarily match the standard intuition of polynomial time. Formally speaking, polynomial time means polynomial in the number of bits necessary to specify the input, which doesn't always sync up with what we consider the input to be.
As an example, consider the set partition problem. In this problem, you're given a set of numbers and need to determine whether there's a way to split the set into two smaller sets, each of which has the same sum. The naive solution to this problem runs in time O(2^n) and works by just brute-force testing all subsets. With dynamic programming, though, it's possible to solve this problem in time O(nN), where n is the number of elements in the set and N is the sum of the values in the set. Technically speaking, the runtime O(nN) is not polynomial time because the numeric value N is written out in only log_2 N bits, but assuming that the numeric value of N isn't too large, this is a perfectly reasonable runtime.
This algorithm is called a pseudopolynomial-time algorithm because the runtime O(nN) "looks" like a polynomial, but technically speaking is exponential in the size of the input. Many NP-hard problems, especially ones involving numeric values, admit pseudopolynomial-time algorithms and are therefore easy to solve assuming that the numeric values aren't too large.
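A compact sketch of that dynamic program (my own illustration) just tracks which subset sums are reachable:

    def can_partition(nums):
        # pseudopolynomial DP: O(n * N), where N is the sum of the numbers
        total = sum(nums)
        if total % 2:
            return False                     # an odd total can never split evenly
        target = total // 2
        reachable = {0}                      # subset sums reachable so far
        for x in nums:
            reachable |= {s + x for s in reachable if s + x <= target}
        return target in reachable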
For more information on pseudopolynomial time, check out this earlier Stack Overflow question about pseudopolynomial time.
Option Four: Randomized Algorithms
If a problem is NP-hard and P ≠ NP, then there is no deterministic algorithm that can solve that problem in worst-case polynomial time. But what happens if we allow for algorithms that introduce randomness? If we're willing to settle for an algorithm that gives a good answer on expectation, then we can often get relatively good answers to NP-hard problems in not much time.
As an example, consider the maximum cut problem. In this problem, you're given an undirected graph and want to find a way to split the nodes in the graph into two nonempty groups A and B with the maximum number of edges running between the groups. This has some interesting applications in computational physics (unfortunately, I don't understand them at all, but you can peruse this paper for some details about this). This problem is known to be NP-hard, but there's a simple randomized approximation algorithm for it. If you just toss each node into one of the two groups completely at random, you end up with a cut that, on expectation, is within 50% of the optimal solution.
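The randomized approximation really is as simple as it sounds; a sketch (my own illustration) is:

    import random

    def random_cut(nodes, edges):
        # put every node on a random side; each edge crosses the cut with
        # probability 1/2, so on expectation at least half of all edges
        # (and hence at least half of the optimal cut) are cut
        side = {v: random.random() < 0.5 for v in nodes}
        cut_size = sum(1 for u, v in edges if side[u] != side[v])
        return side, cut_size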
Returning to SAT, many modern SAT solvers use some degree of randomness to guide the search for a satisfying assignment. The WalkSAT and GSAT algorithms, for example, work by picking a random clause that isn't currently satisfied and trying to satisfy it by flipping some variable's truth value. This often guides the search toward a satisfying assignment, causing these algorithms to work well in practice.
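The core loop of a WalkSAT-style search can be sketched roughly as follows (a toy version for illustration, far from the tuned implementations in real solvers); clauses are lists of nonzero integers with -x meaning "not x":

    import random

    def walksat(clauses, variables, max_flips=10_000, p=0.5):
        assign = {v: random.choice([True, False]) for v in variables}

        def satisfied(clause):
            return any(assign[abs(l)] == (l > 0) for l in clause)

        for _ in range(max_flips):
            unsat = [c for c in clauses if not satisfied(c)]
            if not unsat:
                return assign                      # found a satisfying assignment
            clause = random.choice(unsat)          # focus on one broken clause
            if random.random() < p:
                var = abs(random.choice(clause))   # random-walk step
            else:
                # greedy step: flip whichever variable leaves the fewest unsatisfied clauses
                def cost(v):
                    assign[v] = not assign[v]
                    bad = sum(1 for c in clauses if not satisfied(c))
                    assign[v] = not assign[v]
                    return bad
                var = min({abs(l) for l in clause}, key=cost)
            assign[var] = not assign[var]
        return None                                # gave up; the formula may still be satisfiable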
It turns out there's a lot of open theoretical problems about the ability to solve NP-hard problems using randomized algorithms. If you're curious, check out the complexity class BPP and the open problem of its relation to NP.
Option Five: Parameterized Algorithms
Some NP-hard problems take in multiple different inputs. For example, the long path problem takes as input a graph and a length k, then asks whether there's a simple path of length k in the graph. The subset sum problem takes in as input a set of numbers and a target number k, then asks whether there's a subset of the numbers that adds up to exactly k.
Interestingly, in the case of the long path problem, there's an algorithm (the color-coding algorithm) whose runtime is O((n^3 log n) · b^k), where n is the number of nodes, k is the length of the requested path, and b is some constant. This runtime is exponential in k, but is only polynomial in n, the number of nodes. This means that if k is fixed and known in advance, the runtime of the algorithm as a function of the number of nodes is only O(n^3 log n), which is quite a nice polynomial. Similarly, in the case of the subset sum problem, there's a dynamic programming algorithm whose runtime is O(nk), where n is the number of elements of the set and k is the target value. If k is fixed in advance as some constant, then this algorithm will run in time O(n), meaning that it will be possible to exactly solve subset sum in linear time.
Both of these algorithms are examples of parameterized algorithms, algorithms for solving NP-hard problems that split the hardness of the problem into two pieces - a "hard" piece that depends on some input parameter to the problem, and an "easy" piece that scales gracefully with the size of the input. These algorithms can be useful for finding exact solutions to NP-hard problems when the parameter in question is small. The color-coding algorithm mentioned above, for example, has proven quite useful in practice in computational biology.
However, some problems are conjectured to not have any nice parameterized algorithms. Graph coloring, for example, is suspected to not have any efficient parameterized algorithms. In the cases where parameterized algorithms exist, they're often quite efficient, but you can't rely on them for all problems.
For more information on parameterized algorithms, check out this earlier Stack Overflow question.
Option Six: Fast Exponential-Time Algorithms
Exponential-time algorithms don't scale well - their runtimes approach the lifetime of the universe for inputs as small as 100 or 200 elements.
What if you need to solve an NP-hard problem, but you know the input is reasonably small - say, perhaps its size is somewhere between 50 and 70. Standard exponential-time algorithms are probably not going to be fast enough to solve these problems. What if you really do need an exact solution to the problem and the other approaches here won't cut it?
In some cases, there are "optimized" exponential-time algorithms for NP-hard problems. These are algorithms whose runtime is exponential, but not as bad an exponential as the naive solution. For example, a simple exponential-time algorithm for the 3-coloring problem (given a graph, determine if you can color each node one of three colors so that no edge's endpoints are the same color) might work by checking each possible way of coloring the nodes in the graph, testing if any of them are 3-colorings. There are 3^n possible ways to do this, so in the worst case the runtime of this algorithm will be O(3^n · poly(n)) for some small polynomial poly(n). However, using more clever tricks and techniques, it's possible to develop an algorithm for 3-colorability that runs in time O(1.3289^n). This is still an exponential-time algorithm, but it's a much faster exponential-time algorithm. For example, 3^19 is about 10^9, so if a computer can do one billion operations per second, it can use our initial brute-force algorithm to (roughly speaking) solve 3-colorability in graphs with up to 19 nodes in one second. Using the O(1.3289^n)-time exponential algorithm, we could solve instances of up to about 73 nodes in about a second. That's a huge improvement - we've grown the size we can handle in one second by more than a factor of three!
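The brute-force baseline described above fits in a few lines (a sketch for illustration):

    from itertools import product

    def three_colorable_bruteforce(nodes, edges):
        # O(3^n * |E|): try every assignment of three colors to the nodes
        for assignment in product(range(3), repeat=len(nodes)):
            color = dict(zip(nodes, assignment))
            if all(color[u] != color[v] for u, v in edges):
                return True
        return False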
As another famous example, consider the traveling salesman problem. There's an obvious O(n! · poly(n))-time solution to TSP that works by enumerating all permutations of the nodes and testing the paths resulting from those permutations. However, by using a dynamic programming algorithm similar to that used by the color-coding algorithm, it's possible to improve the runtime to "only" O(n^2 · 2^n). Given that 13! is about six billion, the naive solution already needs several seconds for a 13-node graph; for comparison, the DP solution can handle graphs of roughly 20 to 25 nodes in a comparable amount of time.
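That dynamic program is the classic Held-Karp algorithm; a compact bitmask sketch (illustration only, taking a full distance matrix and at least two nodes) is:

    from functools import lru_cache

    def held_karp(dist):
        # dist[i][j] = cost of edge (i, j); returns the cost of the cheapest tour
        n = len(dist)                        # assumes n >= 2

        @lru_cache(maxsize=None)
        def best(mask, last):
            # cheapest path that starts at node 0, visits exactly the nodes in
            # `mask` (which always contains node 0 and `last`), and ends at `last`
            if mask == (1 << last) | 1:
                return dist[0][last]
            prev = mask & ~(1 << last)
            return min(best(prev, k) + dist[k][last]
                       for k in range(1, n) if k != last and prev & (1 << k))

        full = (1 << n) - 1
        return min(best(full, last) + dist[last][0] for last in range(1, n))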
These fast exponential-time algorithms are often useful for boosting the size of the inputs that can be exactly solved in practice. Of course, they still run in exponential time, so these approaches are typically not useful for solving very large problem instances.
Option Seven: Solve an Easy Special Case
Many problems that are NP-hard in general have restricted special cases that are known to be solvable efficiently. For example, while in general it’s NP-hard to determine whether a graph has a k-coloring, in the specific case of k = 2 this is equivalent to checking whether a graph is bipartite, which can be checked in linear time using a modified depth-first search. Boolean satisfiability is, generally speaking, NP-hard, but it can be solved in polynomial time if you have an input formula with at most two literals per clause, or where the formula is formed from clauses using XOR rather than inclusive-OR, etc. Finding the largest independent set in a graph is generally speaking NP-hard, but if the graph is bipartite this can be done efficiently due to König’s theorem.
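For instance, the k = 2 case mentioned above is just a bipartiteness check; a sketch using BFS (the text mentions depth-first search, but either traversal works) is:

    from collections import deque

    def is_bipartite(adj):
        # adj maps each node to a list of neighbours; linear-time 2-coloring check
        color = {}
        for start in adj:
            if start in color:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in color:
                        color[v] = 1 - color[u]    # give neighbours the other color
                        queue.append(v)
                    elif color[v] == color[u]:
                        return False               # odd cycle: not 2-colorable
        return True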
As a result, if you find yourself needing to solve what might initially appear to be an NP-hard problem, first check whether the inputs you actually need to solve that problem on have some additional restricted structure. If so, you might be able to find an algorithm that applies to your special case and runs much faster than a solver for the problem in its full generality.
Conclusion
If you need to solve an NP-hard problem, don't despair! There are lots of great options available that might make your intractable problem a lot more approachable. No one of the above techniques works in all cases, but by using some combination of these approaches, it's usually possible to make progress even when confronted with NP-hardness.

An Example for Non-Monotone Worst-Case Complexity

Is somebody aware of a natural program or algorithm that has a non-monotone worst-case behavior?
By non-monotone worst-case behavior I mean that there is a natural number n such that the worst-case runtime for inputs of size n+1 is less than the worst-case runtime for inputs of size n.
Of course, it is easy to construct a program with this behavior. It might even be the case that this happens for small n (like n = 1) in natural programs. But I'm interested in a useful algorithm that is non-monotone for large n.
Is somebody aware of a natural program or algorithm that has a
non-monotone worst-case behavior?
Please define "natural program or algorithm". The concept "algorithm" has a definition I'm aware of, and there are certainly algorithms (as you correctly admit) which have non-monotone worst-case runtime complexity. Is a program "natural" if it does no unecessary work or has minimal runtime complexity for the class of problem it solves? In that case, would you argue that BubbleSort isn't an algorithm? More importantly, I can define a problem the most efficient solution to which has non-monotone worst-case behavior. Would such a problem be "unnatural"? What is your definition of a "natural problem"?
Of course, it is easy to construct a program with this behavior.
Then what's the real question? Until you commit to a definition of natural/useful algorithms and problems, your question has no answer. Are you interested only in pre-existing algorithms which people already use in the real world? If so, you should state that, and the problem becomes one of searching the literature. Frankly, I cannot imagine a reasonable definition of "natural, useful algorithm" which would preclude many examples of algorithms with non-monotone runtime complexity...
But I'm interested in a useful algorithm that is non-monotone for
large n.
Please define "useful algorithm". The concept "algorithm" has a definition I'm aware of, and there are certainly algorithms (as you correctly admit) which have non-monotone worst-case runtime complexity. Is an algorithm "useful" if it correctly solves some problem? I can easily define a problem which can be solved by an algorithm with non-monotone runtime complexity.
Think about a binary search.
When implementing binary search you need to think about the case where the array segment which you're splitting is of odd length. At that point you have 2 choices:
1. Round up/down
2. Check both indexes and make a decision before continuing.
If you choose the first option (let's assume you round down), then for odd-length arrays where the number you're searching for is past the middle point, you'll have an extra iteration to make.
If that odd-length array had one more element, it would have saved you that extra iteration.
If you went for the second option, then most executions of the algorithm that hit more odd-length segments than even-length ones would require more comparisons than a run on an input with one extra element.
Note that all this is very implementation dependent, and so there can't be a real answer without a real algorithm (and moreover a real implementation).
Also, all this assumes you're talking about an actual run-time difference and not an asymptotic difference. If that's not the case, then the answer would be "no": there are no algorithms with non-monotone worst-case asymptotic running time, as that would defy the concept of "worst case".
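If you want to experiment with the round-down variant discussed above, a tiny sketch (my own illustration) that counts loop iterations for a given array size and target position is:

    def iterations(n, target_index):
        # count iterations of a round-down binary search over range(n)
        lo, hi, steps = 0, n - 1, 0
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2             # rounds down on odd-length segments
            if mid == target_index:
                break
            if mid < target_index:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps

    # compare individual targets and the per-size worst case for a few sizes
    for n in range(1, 17):
        per_target = [iterations(n, t) for t in range(n)]
        print(n, per_target, max(per_target))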

Algorithms for Big O Analysis

Which algorithms do you find have amazing (tough, strange) complexity analysis, in terms of both the resulting O notation and the uniqueness of the way they are analyzed?
I have (quite) a few examples:
The union-find data structure, which supports operations in (amortized) inverse Ackermann time. It's particularly nice because the data structure is incredibly easy to code.
Splay trees, which are self-adjusting binary search trees (that is, no extra information is stored other than the BST itself -- no red/black information). Amortized analysis was essentially invented to prove bounds for splay trees; splay trees run in amortized logarithmic time, but worst-case linear time. The proofs are cool.
Fibonacci heaps, which perform most of the priority queue operations in amortized constant time, thus improving the runtime of Dijkstra's algorithm and other problems. As with splay trees, there are slick "potential function" proofs.
Bernard Chazelle's algorithm for computing minimum spanning trees in linear times inverse Ackermann time. The algorithm uses soft heaps, a variant of the traditional priority queue, except that some "corruption" might occur and queries might not be answered correctly.
While on the topic of MSTs: an optimal algorithm has been given by Pettie and Ramachandran, but we don't know the running time!
Lots of randomized algorithms have interesting analyses. I'll only mention one example: Delaunay triangulation can be computed in expected O(n log n) time by incrementally adding points; the analysis is apparently intricate, though I haven't seen it.
Algorithms that use "bit tricks" can be neat, e.g. sorting in O(n log log n) time (and linear space) -- that's right, it breaks the O(n log n) barrier by using more than just comparisons.
Cache-oblivious algorithms often have interesting analyses. For example, cache-oblivious priority queues (see page 3) use log log n levels of sizes n, n^(2/3), n^(4/9), and so on.
(Static) range-minimum queries on arrays are neat. The standard proof tests your limits with respect to reduction: range-minimum queries are reduced to least common ancestor in trees, which is in turn reduced to range-minimum queries on a specific kind of array. The final step uses a cute trick, too.
Ackermann's function.
This one is kinda simple but Comb Sort blows my mind a little.
http://en.wikipedia.org/wiki/Comb_sort
It is such a simple algorithm that for the most part it reads like an overly complicated bubble sort, but it is O(n log n). I find that mildly impressive.
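For reference, a straightforward comb sort sketch (using the usual shrink factor of 1.3 from the linked page) is:

    def comb_sort(a):
        gap = len(a)
        swapped = True
        while gap > 1 or swapped:
            gap = max(1, int(gap / 1.3))     # shrink the gap each pass
            swapped = False
            for i in range(len(a) - gap):
                if a[i] > a[i + gap]:        # bubble-sort style compare, but `gap` apart
                    a[i], a[i + gap] = a[i + gap], a[i]
                    swapped = True
        return a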
The plethora of Algorithms for Fast Fourier Transforms are impressive too, the math that proves their validity is trippy and it was fun to try to prove a few on my own.
http://en.wikipedia.org/wiki/Fast_Fourier_transform
I can fairly easily understand the prime-radix, multiple-prime-radix, and mixed-radix algorithms, but one that works on sets whose size is prime is quite cool.
2D ordered search analysis is quite interesting. You've got a 2-dimensional N×N array of numbers where each row is sorted left-to-right and each column is sorted top-down. The task is to find a particular number in the array.
The recursive algorithm -- pick the element in the middle, compare it with the target number, discard a quarter of the array (depending on the result of the comparison), and apply the procedure recursively to the remaining 3 quarters -- is quite interesting to analyze.
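A sketch of that recursion (my own illustration) makes the analysis visible: each call does constant work and spawns three subproblems of roughly a quarter of the cells, giving a recurrence of the form T(m) = 3T(m/4) + O(1) over the number of cells m.

    def search_sorted_2d(matrix, target):
        # matrix: rows sorted left-to-right, columns sorted top-to-bottom
        def rec(r1, c1, r2, c2):
            # does the submatrix rows r1..r2, cols c1..c2 (inclusive) contain target?
            if r1 > r2 or c1 > c2:
                return False
            rm, cm = (r1 + r2) // 2, (c1 + c2) // 2
            mid = matrix[rm][cm]
            if mid == target:
                return True
            if mid < target:
                # everything in rows r1..rm, cols c1..cm is <= mid, so drop that quarter
                return (rec(r1, cm + 1, rm, c2) or      # top-right quarter
                        rec(rm + 1, c1, r2, cm) or      # bottom-left quarter
                        rec(rm + 1, cm + 1, r2, c2))    # bottom-right quarter
            # everything in rows rm..r2, cols cm..c2 is >= mid, so drop that quarter
            return (rec(r1, c1, rm - 1, cm - 1) or      # top-left quarter
                    rec(r1, cm, rm - 1, c2) or          # top-right quarter
                    rec(rm, c1, r2, cm - 1))            # bottom-left quarter
        return bool(matrix) and rec(0, 0, len(matrix) - 1, len(matrix[0]) - 1)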
Non-deterministically polynomial complexity gets my vote, especially with the (admittedly considered unlikely) possibility that it may turn out to be the same as polynomial. In the same vein, anything that can theoretically benefit from quantum computing (N.B. this set is by no means all algorithms).
The other that would get my vote would be common mathematical operations on arbitrary-precision numbers -- this is where you have to consider things like the fact that multiplying big numbers is more expensive than multiplying small ones. There is quite a lot of analysis of this in Knuth (which shouldn't be news to anyone). Karatsuba's method is pretty neat: cut the two factors in half by digit, (A1;A2) and (B1;B2), and compute A1·B1, A2·B2 and (A1+A2)·(B1+B2); the middle term A1·B2 + A2·B1 falls out of those three products, so you need only three multiplications instead of four before combining the results. Recurse if desired...
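A toy sketch of the three-multiplication trick on Python integers (purely illustrative; serious bignum libraries, including CPython's own, already do this internally above a cutoff):

    def karatsuba(x, y):
        # assumes non-negative integers
        if x < 10 or y < 10:
            return x * y                     # small enough: multiply directly
        m = max(x.bit_length(), y.bit_length()) // 2
        mask = (1 << m) - 1
        a1, a2 = x >> m, x & mask            # x = a1 * 2^m + a2
        b1, b2 = y >> m, y & mask            # y = b1 * 2^m + b2
        hi = karatsuba(a1, b1)
        lo = karatsuba(a2, b2)
        mid = karatsuba(a1 + a2, b1 + b2) - hi - lo   # equals a1*b2 + a2*b1
        return (hi << (2 * m)) + (mid << m) + lo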
Shell sort. There are tons of variants with various increments, most of which have no benefits except to make the complexity analysis simpler.

Resources