Why are Fibonacci numbers significant in computer science? - algorithm

Fibonacci numbers have become a popular introduction to recursion for Computer Science students, and there's a strong argument that they appear throughout nature. For these reasons, many of us are familiar with them.
They also show up elsewhere in Computer Science, in surprisingly efficient data structures and algorithms based upon the sequence.
There are two main examples that come to mind:
Fibonacci heaps, which have better amortized running time than binomial heaps.
Fibonacci search, which shares O(log N) running time with binary search on an ordered array.
Is there some special property of these numbers that gives them an advantage over other numerical sequences? Is it a spatial quality? What other possible applications could they have?
It seems strange to me as there are many natural number sequences that occur in other recursive problems, but I've never seen a Catalan heap.

The Fibonacci numbers have all sorts of really nice mathematical properties that make them excellent in computer science. Here are a few:
They grow exponentially fast. One interesting data structure in which the Fibonacci series comes up is the AVL tree, a form of self-balancing binary tree. The intuition behind this tree is that each node maintains a balance factor so that the heights of the left and right subtrees differ by at most one. Because of this, the minimum number of nodes necessary to get an AVL tree of height h is defined by a recurrence that looks like N(h + 2) ~= N(h) + N(h + 1), which looks a lot like the Fibonacci series. If you work out the math, you can show that the number of nodes necessary to get an AVL tree of height h is F(h + 2) - 1. Because the Fibonacci series grows exponentially fast, this means that the height of an AVL tree is at most logarithmic in the number of nodes, giving you the O(lg n) lookup time we know and love about balanced binary trees. In fact, if you can bound the size of some structure with a Fibonacci number, you're likely to get an O(lg n) runtime on some operation. This is the real reason that Fibonacci heaps are called Fibonacci heaps - the analysis of the dequeue-min operation involves bounding the number of nodes that can appear in a tree of a given degree, and that lower bound is a Fibonacci number.
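For illustration, here's a quick Python check of that relation (a sketch; it uses the convention that a single node has height 1, and includes the +1 for the root that the recurrence above glosses over):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def min_avl_nodes(h):
        # Minimum number of nodes in an AVL tree of height h
        # (a single node has height 1 in this convention).
        if h <= 0:
            return 0
        if h == 1:
            return 1
        return 1 + min_avl_nodes(h - 1) + min_avl_nodes(h - 2)

    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    for h in range(1, 15):
        assert min_avl_nodes(h) == fib(h + 2) - 1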
Every positive integer can be written as the sum of distinct, non-consecutive Fibonacci numbers (Zeckendorf's theorem). This property of the Fibonacci numbers is critical to getting Fibonacci search working at all; if you couldn't add together distinct Fibonacci numbers to make any possible number, this search wouldn't work. Contrast this with a lot of other series, like 3^n or the Catalan numbers. This is also partly why a lot of algorithms favor powers of two, I think.
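A greedy decomposition is enough to find such a representation; here's a small Python sketch (assuming n > 0):

    def zeckendorf(n):
        # Greedily write n > 0 as a sum of distinct, non-consecutive Fibonacci numbers.
        fibs = [1, 2]
        while fibs[-1] < n:
            fibs.append(fibs[-1] + fibs[-2])
        parts = []
        for f in reversed(fibs):
            if f <= n:
                parts.append(f)
                n -= f
        return parts

    print(zeckendorf(100))   # [89, 8, 3]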
The Fibonacci numbers are efficiently computable. If the series couldn't be generated extremely efficiently (you can get the first n terms in O(n), or any arbitrary term in O(lg n)), a lot of the algorithms that use them wouldn't be practical. Generating Catalan numbers is pretty computationally tricky, IIRC. On top of this, the Fibonacci numbers have a nice property where, given any two consecutive Fibonacci numbers, say F(k) and F(k + 1), we can easily compute the next or previous Fibonacci number by adding the two values (F(k) + F(k + 1) = F(k + 2)) or subtracting them (F(k + 1) - F(k) = F(k - 1)). This property is exploited in several algorithms, in conjunction with property (2), to break numbers apart into sums of Fibonacci numbers. For example, Fibonacci search uses this to locate values in memory, while a similar algorithm can be used to quickly and efficiently compute logarithms.
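As a sketch of the O(lg n) computation, here's the fast-doubling method in Python (one of several ways to do it, equivalent to repeatedly squaring the [[0,1],[1,1]] matrix):

    def fib_pair(n):
        # Returns (F(n), F(n+1)) using fast doubling: O(log n) arithmetic steps.
        if n == 0:
            return (0, 1)
        a, b = fib_pair(n // 2)          # a = F(k), b = F(k+1) with k = n // 2
        c = a * (2 * b - a)              # F(2k)
        d = a * a + b * b                # F(2k+1)
        return (d, c + d) if n % 2 else (c, d)

    print(fib_pair(10)[0])   # 55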
They're pedagogically useful. Teaching recursion is tricky, and the Fibonacci series is a great way to introduce it. You can talk about straight recursion, about memoization, or about dynamic programming when introducing the series. Additionally, the amazing closed-form for the Fibonacci numbers is often taught as an exercise in induction or in the analysis of infinite series, and the related matrix equation for Fibonacci numbers is commonly introduced in linear algebra as a motivation behind eigenvectors and eigenvalues. I think that this is one of the reasons that they're so high-profile in introductory classes.
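The three teaching angles mentioned above each fit in a few lines of Python (a sketch):

    from functools import lru_cache

    def fib_naive(n):                       # straight recursion: exponential time
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n):                        # memoization: linear number of calls
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

    def fib_dp(n):                          # bottom-up dynamic programming: O(n) time, O(1) space
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a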
I'm sure there are more reasons than just this, but I'm sure that some of these reasons are the main factors. Hope this helps!

The greatest common divisor is another piece of magic; see this for far too many magic properties. But Fibonacci numbers are easy to calculate, and they have a specific name. By comparison, the natural numbers 1, 2, 3, 4, 5 have plenty of logic of their own: all the primes are among them, the sum of 1..n is computable, each one can be produced from the others, ... but nobody pays special attention to them :)
One important thing I forgot to mention is the golden ratio, which has a very important impact in real life (for example, you like wide monitors :)

If you have an algorithm that can be successfully explained in a simple and concise manner, with understandable examples in CS and nature, what better teaching tool could someone come up with?

Fibonacci sequences are indeed found everywhere in nature/life. They're useful for modeling growth of animal populations, plant cell growth, snowflake shape, plant shape, cryptography, and of course computer science. I've heard them referred to as the DNA pattern of nature.
Fibonacci heaps have already been mentioned; the number of children of each node in the heap is at most log(n). Also, the subtree rooted at a node with m children has at least F(m+2) nodes.
Torrent-like protocols which use a system of nodes and supernodes use Fibonacci numbers to decide when a new supernode is needed and how many subnodes it will manage. They do node management based on the Fibonacci spiral (golden ratio). See in the photo below how nodes are split/merged (partitioned from one large square into smaller ones and vice versa). See photo: http://smartpei.typepad.com/.a/6a00d83451db7969e20115704556bd970b-pi
Some occurrences in nature:
http://www.mcs.surrey.ac.uk/Personal/R.Knott/Fibonacci/sneezewort.GIF
http://img.blogster.com/view/anacoana/post-uploads/finger.gif
http://jwilson.coe.uga.edu/EMAT6680/Simmons/6690Pictures/pinecone3yellow.gif
http://2.bp.blogspot.com/-X5II-IhjXuU/TVbHrpmRnLI/AAAAAAAAABU/nv73Y9Ylkkw/s320/amazing_fun_featured_2561778790105101600S600x600Q85_200907231856306879.jpg

I don't think there's a definitive answer, but one possibility is that the operation of dividing a set S into two partitions S1 and S2, one of which is then divided into two sub-partitions S11 and S12, one of which has the same size as S2, is a likely approach in many algorithms, and that can sometimes be described numerically as a Fibonacci sequence.

Let me add another data structure to yours: Fibonacci trees. They are interesting because the calculation of the next position in the tree can be done by mere addition of the previous nodes:
http://xw2k.nist.gov/dads/html/fibonacciTree.html
It ties in well with the discussion by templatetypedef on AVL trees (an AVL tree can at worst have Fibonacci structure). I've also seen buffers extended in Fibonacci steps rather than powers of two in some cases.

Just to add a bit of trivia about this: Fibonacci numbers describe the breeding of rabbits. You start with (1, 1), two rabbits, and then their population grows exponentially.

Their computation as a power of the [[0,1],[1,1]] matrix can be considered the most primitive problem of Operational Research (sort of like the Prisoner's Dilemma is the most primitive problem of Game Theory).

Symbols with frequencies that are successive Fibonacci numbers create maximum-depth Huffman trees; these trees correspond to source symbols being encoded with maximum-length binary codes. Non-Fibonacci source symbol frequencies create more balanced trees, with shorter codes. The code length has direct implications for the description complexity of the finite state machine that is responsible for decoding a given Huffman code.
Conjecture: the 1st (Fibonacci-frequency) image will be compressed to 38 bits, while the 2nd (uniform) one to 50 bits. It seems that the closer your source symbol frequencies are to Fibonacci numbers, the shorter the final binary sequence and the better the compression, maybe optimal in the Huffman model.
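You can see the depth effect with a small Python sketch that only tracks code lengths (not from the paper below; the frequencies and symbol count are arbitrary):

    import heapq
    from itertools import count

    def huffman_code_lengths(freqs):
        # Returns {symbol index: code length} for the given frequencies.
        tie = count()                       # tie-breaker so heapq never compares dicts
        heap = [(f, next(tie), {i: 0}) for i, f in enumerate(freqs)]
        heapq.heapify(heap)
        while len(heap) > 1:
            fa, _, a = heapq.heappop(heap)
            fb, _, b = heapq.heappop(heap)
            merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
            heapq.heappush(heap, (fa + fb, next(tie), merged))
        return heap[0][2]

    fib = [1, 1, 2, 3, 5, 8, 13, 21]        # successive Fibonacci frequencies
    uni = [8] * 8                           # uniform frequencies
    print(sorted(huffman_code_lengths(fib).values()))  # [1, 2, 3, 4, 5, 6, 7, 7]: maximally skewed
    print(sorted(huffman_code_lengths(uni).values()))  # [3, 3, 3, 3, 3, 3, 3, 3]: balanced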
Further Reading:
Buro, M. (1993). On the maximum length of Huffman codes. Information
Processing Letters, 45(5), 219-223. doi:10.1016/0020-0190(93)90207-p

For me this is about order and space coordinates.
The Fibonacci sequence can be used as a clock.
The Fibonacci sequence allows us to calculate the golden ratio decimal by decimal.
The golden ratio multiplied by itself gives the golden ratio plus one (φ² = φ + 1).
So we can certainly cut an integer into a series of integers, of units, by using for example the indexes.
I made a first naive version in Python (a proof of concept); the code is still to be updated.
https://gitlab.com/numbers/Numbers/-/blob/main/ranging.py
So we can frame, count and coordinate the calculation steps and the memory spaces in this perfectly periodic reference frame (in time) and thus make it a kind of universal multiplication-table equivalent. For me it is explicitly a mapping.
The idea is to eventually propose a ternary code with explicit management of the memory spaces according to the Fibonacci calculation step, and then to find all our numbers there.
Once that is done, to use this mapping, this universal table, this filter: to check the concordance, the consistency, the periodicity of complex computable operations, such as the Wheeler experiment, sine, gravity, etc.
It sounds pretentious when you say it like that. It is not. Nobody created the golden ratio or Fibonacci. They are simply there, given like fruit on a tree.

Related

Hashing of a Bitstring to Sort by Similarities

Problem description:
We have a lot of bitstrings of the same size. The number and the size of bitstrings is huge. E.g.: 10100101 and 00001111.
Now there is a distance function that simply counts the number of positions at which both bitstrings have a set bit. In this example the distance is 2, because the third-last and last bits are set in both bitstrings.
-> Now we can make a tour through the bitstrings with the maximal distance, because every bitstring can be converted to a vertex which is connected to all other bitstrings (by the distance function).
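For reference, the distance function as described can be written as (a sketch matching the example above):

    def distance(a: str, b: str) -> int:
        # Count positions where both bitstrings have a set bit.
        return sum(x == y == '1' for x, y in zip(a, b))

    assert distance("10100101", "00001111") == 2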
Goal
However, this has a complexity of O(N²). My idea is to use a hashing function that preserves the similarities and then do a simple sort on the hash values. This should result in a near-maximal tour. Of course it is not the best result, but it should be a reasonably good one.
Current Problem
My own hashing function rates the left bits higher than the right bits, so that they have a more significant effect on the sort.
Actual Question
Does such an algorithm exist?
Is it possible to use Locality-Sensitive Hashing (LSH) for that aim, and if so, can you formulate the respective algorithm? (I don't fully understand the algorithm right now.)
Thank you, guys!

What sorting techniques can I use when comparing elements is expensive?

Problem
I have an application where I want to sort an array a of elements a0, a1,...,an-1. I have a comparison function cmp(i,j) that compares elements ai and aj and a swap function swap(i,j), that swaps elements ai and aj of the array. In the application, execution of the cmp(i,j) function might be extremely expensive, to the point where one execution of cmp(i,j) takes longer than any other steps in the sort (except for other cmp(i,j) calls, of course) together. You may think of cmp(i,j) as a rather lengthy IO operation.
Please assume for the sake of this question that there is no way to make cmp(i,j) faster. Assume all optimizations that could possibly make cmp(i,j) faster have already been done.
Questions
Is there a sorting algorithm that minimizes the number of calls to cmp(i,j)?
It is possible in my application to write a predicate expensive(i,j) that is true iff a call to cmp(i,j) would take a long time. expensive(i,j) is cheap, and expensive(i,j) ∧ expensive(j,k) → expensive(i,k) mostly holds in my current application. This is not guaranteed, though.
Would the existence of expensive(i,j) allow for a better algorithm that tries to avoid expensive comparison operations? If yes, can you point me to such an algorithm?
I'd like pointers to further material on this topic.
Example
This is an example that is not entirely unlike the application I have.
Consider a set of possibly large files. In this application the goal is to find duplicate files among them. This essentially boils down to sorting the files by some arbitrary criterion and then traversing them in order, outputting sequences of equal files that were encountered.
Of course reading in large amounts of data is expensive, therefore one can, for instance, only read the first megabyte of each file and calculate a hash function on this data. If the files compare equal, so do the hashes, but the reverse may not hold. Two large files could differ in only one byte near the end.
The implementation of expensive(i,j) in this case is simply a check whether the hashes are equal. If they are, an expensive deep comparison is necessary.
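In code, the prefix-hash version of the predicate might look roughly like this (prefix_hash and the hashes dict are hypothetical helpers, not part of the question):

    import hashlib

    def prefix_hash(path, nbytes=1 << 20):
        # Hash of the first megabyte of a file.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read(nbytes)).digest()

    def expensive(path_i, path_j, hashes):
        # Equal prefix hashes mean only a full byte-by-byte comparison can decide.
        return hashes[path_i] == hashes[path_j]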
I'll try to answer each question as best as I can.
Is there a sorting algorithm that minimizes the number of calls to cmp(i,j)?
Traditional sorting methods may have some variation, but in general there is a mathematical limit to the minimum number of comparisons necessary to sort a list, and most algorithms take advantage of that, since comparisons are often far from cheap. You could try sorting by something else, or try using a shortcut that may be faster and approximates the real solution.
Would the existance of expensive(i,j) allow for a better algorithm that tries to avoid expensive comparing operations? If yes, can you point me to such an algorithm?
I don't think you can get around the necessity of doing at least the minimum number of comparisons, but you may be able to change what you compare. If you can compare hashes or subsets of the data instead of the whole thing, that could certainly be helpful. Anything you can do to simplify the comparison operation will make a big difference, but without knowing specific details of the data, it's hard to suggest specific solutions.
I'd like pointers to further material on this topic.
Check these out:
Apparently Donald Knuth's The Art of Computer Programming, Volume 3 has a section on this topic, but I don't have a copy handy.
Wikipedia of course has some insight into the matter.
Sorting an array with minimal number of comparisons
How do I figure out the minimum number of swaps to sort a list in-place?
Limitations of comparison based sorting techniques
The theoretical minimum number of comparisons needed to sort an array of n elements on average is lg (n!), which is about n lg n - n. There's no way to do better than this on average if you're using comparisons to order the elements.
Of the standard O(n log n) comparison-based sorting algorithms, mergesort makes the lowest number of comparisons (just about n lg n, compared with about 1.44 n lg n for quicksort and about n lg n + 2n for heapsort), so it might be a good algorithm to use as a starting point. Typically mergesort is slower than heapsort and quicksort, but that's usually under the assumption that comparisons are fast.
If you do use mergesort, I'd recommend using an adaptive variant of mergesort like natural mergesort so that if the data is mostly sorted, the number of comparisons is closer to linear.
There are a few other options available. If you know for a fact that the data is already mostly sorted, you could use insertion sort or a standard variation of heapsort to try to speed up the sorting. Alternatively, you could use mergesort but use an optimal sorting network as a base case when n is small. This might shave off enough comparisons to give you a noticeable performance boost.
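As a sketch of the adaptive idea, here is a bottom-up natural mergesort that merges the runs already present in the input (cmp(x, y) is assumed to return a negative, zero, or positive number):

    def natural_mergesort(a, cmp):
        # 1. Split the input into maximal non-decreasing runs.
        runs, i = [], 0
        while i < len(a):
            j = i + 1
            while j < len(a) and cmp(a[j - 1], a[j]) <= 0:
                j += 1
            runs.append(a[i:j])
            i = j
        # 2. Repeatedly merge adjacent runs until one remains.
        while len(runs) > 1:
            merged = []
            for k in range(0, len(runs) - 1, 2):
                merged.append(merge(runs[k], runs[k + 1], cmp))
            if len(runs) % 2:
                merged.append(runs[-1])
            runs = merged
        return runs[0] if runs else []

    def merge(x, y, cmp):
        out, i, j = [], 0, 0
        while i < len(x) and j < len(y):
            if cmp(x[i], y[j]) <= 0:
                out.append(x[i]); i += 1
            else:
                out.append(y[j]); j += 1
        out.extend(x[i:]); out.extend(y[j:])
        return out

If the data is already mostly sorted, step 1 finds only a few runs and the merging phase needs close to n comparisons.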
Hope this helps!
A technique called the Schwartzian transform can be used to reduce any sorting problem to that of sorting precomputed keys. It requires you to apply a function f to each of your input items, where f(x) < f(y) if and only if x < y.
(Python-oriented answer, when I thought the question was tagged [python])
If you can define a function f such that f(x) < f(y) if and only if x < y, then you can sort using
sorted(L, key=f)
Python guarantees that key is called at most once for each element of the iterable you are sorting. This provides support for the Schwartzian transform.
Python 3 does not support specifying a cmp function, only the key parameter. This page provides a way of easily converting any cmp function to a key function.
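For example, with functools.cmp_to_key (the standard-library converter), a cmp-style function can be plugged straight into sorted(); note that the comparison is still called O(n log n) times:

    from functools import cmp_to_key

    def cmp(x, y):
        # Stand-in for an expensive comparison: negative/zero/positive, like C's strcmp.
        return (x > y) - (x < y)

    data = [3, 1, 2]
    print(sorted(data, key=cmp_to_key(cmp)))   # [1, 2, 3]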
Is there a sorting algorithm that minimizes the number of calls to cmp(i,j)?
Edit: Ah, sorry. There are algorithms that minimize the number of comparisons (below), but not that I know of for specific elements.
Would the existence of expensive(i,j) allow for a better algorithm that tries to avoid expensive comparing operations? If yes, can you point me to such an algorithm?
Not that I know of, but perhaps you'll find it in these papers below.
I'd like pointers to further material on this topic.
On Optimal and Efficient in Place Merging
Stable Minimum Storage Merging by Symmetric Comparisons
Optimal Stable Merging (this one seems to be O(n log² n), though)
Practical In-Place Mergesort
If you implement any of them, posting them here might be useful for others too! :)
Is there a sorting algorithm that minimizes the number of calls to cmp(i,j)?
The merge insertion algorithm, described in D. Knuth's "The Art of Computer Programming", Vol. 3, chapter 5.3.1, uses fewer comparisons than other comparison-based algorithms. But it still needs O(N log N) comparisons.
Would the existence of expensive(i,j) allow for a better algorithm that tries to avoid expensive comparing operations? If yes, can you point me to such an algorithm?
I think some existing sorting algorithms may be modified to take the expensive(i,j) predicate into account. Let's take the simplest of them - insertion sort. One of its variants, named in Wikipedia as binary insertion sort, uses only O(N log N) comparisons.
It employs a binary search to determine the correct location to insert new elements. We could apply the expensive(i,j) predicate after each binary search step to determine whether it is cheap to compare the inserted element with the "middle" element found in that binary search step. If it is expensive, we could try the "middle" element's neighbors, then their neighbors, etc. If no cheap comparison can be found, we just return to the "middle" element and perform the expensive comparison.
There are several possible optimizations. If the predicate and/or the cheap comparisons are not so cheap, we could roll back to the "middle" element before all other possibilities are tried. Also, if move operations cannot be considered very cheap, we could use some order-statistics data structure (like an indexable skiplist) to reduce insertion cost to O(N log N).
This modified insertion sort needs O(N log N) time for data movement, O(N²) predicate computations and cheap comparisons, and O(N log N) expensive comparisons in the worst case. But more likely there would be only O(N log N) predicates and cheap comparisons and O(1) expensive comparisons.
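Here's a rough Python sketch of that modified binary insertion sort. It takes element-based cmp(x, y) and expensive(x, y) functions rather than the index-based versions from the question, purely for brevity:

    def insertion_sort_cheap_first(a, cmp, expensive):
        # Binary insertion sort that prefers pivots that are cheap to compare with x.
        result = []
        for x in a:
            lo, hi = 0, len(result)
            while lo < hi:
                mid = (lo + hi) // 2
                pivot = pick_cheap_pivot(result, x, mid, lo, hi, expensive)
                if cmp(x, result[pivot]) < 0:
                    hi = pivot
                else:
                    lo = pivot + 1
            result.insert(lo, x)
        return result

    def pick_cheap_pivot(result, x, mid, lo, hi, expensive):
        # Scan outwards from mid for an element that is cheap to compare with x;
        # fall back to mid (an expensive comparison) if none is found in [lo, hi).
        for offset in range(hi - lo):
            for candidate in (mid - offset, mid + offset):
                if lo <= candidate < hi and not expensive(x, result[candidate]):
                    return candidate
        return mid

Any pivot inside [lo, hi) keeps the search correct; a bad pivot only costs extra cheap comparisons, which matches the worst-case figures above.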
Consider a set of possibly large files. In this application the goal is to find duplicate files among them.
If the only goal is to find duplicates, I think sorting (at least comparison sorting) is not necessary. You could just distribute the files between buckets depending on the hash value computed for the first megabyte of data from each file. If there is more than one file in some bucket, take another 10, 100, 1000, ... megabytes. If there is still more than one file in some bucket, compare them byte-by-byte. Actually this procedure is similar to radix sort.
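A rough sketch of that bucketing idea in Python (it rehashes fixed-size chunks rather than growing ones, and trusts SHA-256 instead of a final byte-by-byte pass, so it only approximates the procedure described):

    from collections import defaultdict
    import hashlib

    def find_duplicates(paths, chunk=1 << 20):
        # Repeatedly refine buckets by hashing the next chunk of each file;
        # groups that survive to end-of-file together are reported as duplicates.
        buckets = {None: list(paths)}
        offset = 0
        while True:
            next_buckets = defaultdict(list)
            progress = False
            for key, group in buckets.items():
                if len(group) < 2:
                    continue                    # unique so far: cannot be a duplicate
                for p in group:
                    with open(p, "rb") as f:
                        f.seek(offset)
                        data = f.read(chunk)
                    if data:
                        progress = True
                    next_buckets[(key, hashlib.sha256(data).digest())].append(p)
            if not progress:
                return [g for g in next_buckets.values() if len(g) > 1]
            buckets = next_buckets
            offset += chunk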
Most sorting algorithms out there try to minimize the number of comparisons during sorting.
My advice:
Pick quick-sort as a base algorithm and memoize the results of comparisons, in case you happen to compare the same elements again. This should help you in the O(N^2) worst case of quick-sort. Bear in mind that this will make you use O(N^2) memory.
Now if you are really adventurous you could try the Dual-Pivot quick-sort.
Something to keep in mind is that if you are continuously sorting the list with new additions, and the comparison between two elements is guaranteed to never change, you can memoize the comparison operation which will lead to a performance increase. In most cases this won't be applicable, unfortunately.
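A minimal sketch of that memoization, assuming the relative order of two elements never changes and using object identity as the cache key:

    from functools import cmp_to_key

    def memoized(cmp):
        # Wrap a sign-returning comparison so each pair is compared at most once.
        cache = {}
        def wrapper(x, y):
            key = (id(x), id(y))
            if key not in cache:
                r = cmp(x, y)
                cache[key] = r
                cache[(id(y), id(x))] = -r
            return cache[key]
        return wrapper

    # data.sort(key=cmp_to_key(memoized(slow_cmp)))   # hypothetical usage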
We can look at your problem from another direction: it seems your problem is IO-related, so you can take advantage of parallel sorting algorithms. In fact, you can run many threads to perform the comparisons on files, then sort them with one of the best-known parallel algorithms, such as sample sort.
Quicksort and mergesort are the fastest possible sorting algorithms, unless you have some additional information about the elements you want to sort. They will need O(n log(n)) comparisons, where n is the size of your array.
It is mathematically proven that no generic comparison-based sorting algorithm can be more efficient than that.
If you want to make the procedure faster, you might consider adding some metadata to accelerate the computation (can't be more precise unless you are, too).
If you know something stronger, such as the existence of a maximum and a minimum, you can use faster sorting algorithms, such as radix sort or bucket sort.
You can look for all the mentioned algorithms on wikipedia.
As far as I know, you can't benefit from the expensive relationship. Even if you know that, you still need to perform such comparisons. As I said, you'd better try and cache some results.
EDIT I took some time to think about it, and I came up with a slightly customized solution, that I think will make the minimum possible amount of expensive comparisons, but totally disregards the overall number of comparisons. It will make at most (n-m)*log(k) expensive comparisons, where
n is the size of the input vector
m is the number of distinct component which are easy to compare between each other
k is the maximum number of elements which are hard to compare and have consecutive ranks.
Here is the description of the algorithm. It's worth noting that it will perform much worse than a simple merge sort unless m is big and k is small. The total running time is O(n^4 + E(n-m)log(k)), where E is the cost of an expensive comparison (I assumed E >> n, to prevent it from being wiped out of the asymptotic notation). That n^4 can probably be further reduced, at least in the mean case.
EDIT: The file I posted contained some errors. While trying it, I also fixed them (I overlooked the pseudocode for the insert_sorted function, but the idea was correct). I made a Java program that sorts a vector of integers, with delays added as you described. Even though I was skeptical, it actually does better than mergesort if the delay is significant (I used a 1 s delay against integer comparison, which usually takes nanoseconds to execute).

What is the most efficient way of finding the first element of the ith row when A[i,j]=j*(A[i-1,j+1]-A[i-1,j])?

When the first row is 1, 1/2, 1/3, ...
Here's an image to support the question.
Does there exist a more efficient approach than the naive O(n^2) approach?
I came across this when studying Bernoulli numbers and then consequently on reaching "Akiyama–Tanigawa algorithm".
One of the ways could be simply precomputing the results and storing them in a table. Since Bernoulli numbers grow very quickly, for most practical purposes we wouldn't need Bernoulli numbers for much larger n. Consider Bernoulli(400): it's around -(10^550).
But looking at it only algorithmically, is there a better approach than the O(n^2) one?
The first elements form the sequence of Bernoulli numbers. The numerators and denominators for the Bernoulli numbers are found using the A027641 sequence and A027642 sequence, respectively. Both of those sequences have closed-form sums on their respective pages that can be used to compute their terms.
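For reference, the naive O(n^2) table is only a few lines with exact rationals; this sketch uses the standard form of the Akiyama–Tanigawa recurrence (which differs only in indexing from the statement above) and the B(1) = +1/2 convention:

    from fractions import Fraction

    def bernoulli(n):
        # First n+1 Bernoulli numbers via the O(n^2) Akiyama-Tanigawa table.
        a = [Fraction(1, j + 1) for j in range(n + 1)]   # first row: 1, 1/2, 1/3, ...
        out = [a[0]]
        for m in range(1, n + 1):
            a = [(j + 1) * (a[j] - a[j + 1]) for j in range(n - m + 1)]
            out.append(a[0])
        return out

    print([str(b) for b in bernoulli(6)])   # ['1', '1/2', '1/6', '0', '-1/30', '0', '1/42']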

Algorithms with superexponential runtime?

I was talking with a student the other day about the common complexity classes of algorithms, like O(n), O(n^k), O(n lg n), O(2^n), O(n!), etc. I was trying to come up with an example of a problem whose best known solution has super-exponential runtime, such as O(2^(2^n)), but which is still decidable (e.g. not the halting problem!). The only example I know of is satisfiability of Presburger arithmetic, which I don't think any intro CS students would really understand or be able to relate to.
My question is whether there is a well-known problem whose best known solution has runtime that is superexponential; at least ω(n!) or ω(n^n). I would really hope that there is some "reasonable" problem meeting this description, but I'm not aware of any.
Maximum Parsimony is the problem of finding an evolutionary tree connecting n DNA sequences (representing species) that requires the fewest single-nucleotide mutations. The n given sequences are constrained to appear at the leaves; the tree topology and the sequences at internal nodes are what we get to choose.
In more CS terms: We are given a bunch of length-k strings that must appear at the leaves of some tree, and we have to choose a tree, plus a length-k string for each internal node in the tree, so as to minimise the sum of Hamming distances across all edges.
When a fixed tree is also given, the optimal assignment of sequences to internal nodes can be determined very efficiently using the Fitch algorithm. But in the usual case, a tree is not given (i.e. we are asked to find the optimal tree), and this makes the problem NP-hard, meaning that every tree must in principle be tried. Even though an evolutionary tree has a root (representing the hypothetical ancestor), we only need to consider distinct unrooted trees, since the minimum number of mutations required is not affected by the position of the root. For n species there are 3 * 5 * 7 * ... * (2n-5) leaf-labelled unrooted binary trees. (There is just one such tree with 3 species, which has a single internal vertex and 3 edges; the 4th species can be inserted at any of the 3 edges to produce a distinct 5-edge tree; the 5th species can be inserted at any of these 5 edges, and so on -- this process generates all trees exactly once.) This is sometimes written (2n-5)!!, with !! meaning "double factorial".
In practice, branch and bound is used, and on most real datasets this manages to avoid evaluating most trees. But highly "non-treelike" random data requires all, or almost all (2n-5)!! trees to be examined -- since in this case many trees have nearly equal minimum mutation counts.
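To get a feel for the growth, here's a tiny sketch computing (2n-5)!!:

    def num_unrooted_trees(n):
        # (2n-5)!! = 3 * 5 * ... * (2n-5): leaf-labelled unrooted binary trees on n >= 3 species.
        count = 1
        for k in range(3, 2 * n - 4, 2):
            count *= k
        return count

    for n in (5, 10, 20):
        print(n, num_unrooted_trees(n))   # 15, 2027025, then roughly 2.2e20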
Showing all permutations of a string of length n is n!, brute-force search for a Hamiltonian cycle is n!, minimum graph coloring, ...
Edit: even faster-growing is the Ackermann function. In fact, it grows faster than any primitive recursive function.
A(x,y) = y+1 (if x = 0)
A(x,y) = A(x-1,1) (if y=0)
A(x,y) = A(x-1, A(x,y-1)) otherwise.
from wiki:
A(4,3) = 2^(2^65536) − 3, ...
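The definition translates directly to code; just don't feed it anything much bigger than the values below (a sketch with memoization):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def ackermann(x, y):
        if x == 0:
            return y + 1
        if y == 0:
            return ackermann(x - 1, 1)
        return ackermann(x - 1, ackermann(x, y - 1))

    print(ackermann(2, 3))   # 9
    print(ackermann(3, 3))   # 61
    # A(4, 2) = 2^65536 - 3 has 19729 digits; this naive recursion won't get anywhere near it.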
Do algorithms to compute real numbers to a certain precision count? The formula for the area of the Mandelbrot set converges extremely slowly; 10^118 terms for two digits, 10^1181 terms for three.
This is not a practical everyday problem, but it's a way to construct relatively straightforward problems of increasing complexity.
The Kolmogorov complexity K(x) is the size of the smallest program that outputs the string x on a pre-determined universal computer U. It's easy to show that most strings cannot be compressed at all (since there are more strings of length n than programs of length n).
If we give U a maximum running time (say some polynomial function P), we get a time-bounded Kolmogorov complexity. The same counting argument holds: there are some strings that are incompressible under this time-bounded Kolmogorov complexity. Let's call the first such string (of some length n) x_P.
Since the time-bounded Kolmogorov complexity is computable, we can test all strings and find x_P.
Finding x_P can't be done in polynomial time, or we could use this algorithm to compress it, so finding it must be a super-polynomial problem. We do know we can find it in exp(P) time, though. (Jumping over some technical details here.)
So now we have a time bound E = exp(P). We can repeat the procedure to find x_E, and so on.
This approach gives us a decidable super-F problem for every time-constructible function F: find the first string of length n (some large constant) that is incompressible under time-bound F.

Algorithms for Big O Analysis

Which algorithms do you find to have amazing (tough, strange) complexity analysis, in terms of both the resulting O notation and the uniqueness of the way they are analyzed?
I have (quite) a few examples:
The union-find data structure, which supports operations in (amortized) inverse Ackermann time. It's particularly nice because the data structure is incredibly easy to code (see the sketch after this list).
Splay trees, which are self-balancing binary trees (that is, no extra information is stored other than the BST -- no red/black information). Amortized analysis was essentially invented to prove bounds for splay trees; splay trees run in amortized logarithmic time, but worst-case linear time. The proofs are cool.
Fibonacci heaps, which perform most of the priority queue operations in amortized constant time, thus improving the runtime of Dijkstra's algorithm and other problems. As with splay trees, there are slick "potential function" proofs.
Bernard Chazelle's algorithm for computing minimum spanning trees in linear times inverse Ackermann time. The algorithm uses soft heaps, a variant of the traditional priority queue, except that some "corruption" might occur and queries might not be answered correctly.
While on the topic of MSTs: an optimal algorithm has been given by Pettie and Ramachandran, but we don't know the running time!
Lots of randomized algorithms have interesting analyses. I'll only mention one example: Delaunay triangulation can be computed in expected O(n log n) time by incrementally adding points; the analysis is apparently intricate, though I haven't seen it.
Algorithms that use "bit tricks" can be neat, e.g. sorting in O(n log log n) time (and linear space) -- that's right, it breaks the O(n log n) barrier by using more than just comparisons.
Cache-oblivious algorithms often have interesting analyses. For example, cache-oblivious priority queues (see page 3) use log log n levels of sizes n, n^(2/3), n^(4/9), and so on.
(Static) range-minimum queries on arrays are neat. The standard proof tests your limits with respect to reduction: range-minimum queries is reduced to least common ancestor in trees, which is in turn reduced to a range-minimum queries in a specific kind of arrays. The final step uses a cute trick, too.
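To illustrate the first item, a disjoint-set forest with union by rank and path halving fits in a few lines of Python and runs each operation in amortized inverse-Ackermann time:

    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]   # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return False
            if self.rank[ra] < self.rank[rb]:
                ra, rb = rb, ra                                # attach lower rank under higher
            self.parent[rb] = ra
            if self.rank[ra] == self.rank[rb]:
                self.rank[ra] += 1
            return True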
Ackermann's function.
This one is kinda simple but Comb Sort blows my mind a little.
http://en.wikipedia.org/wiki/Comb_sort
It is such a simple algorithm that for the most part it reads like an overly complicated bubble sort, but it is O(n log n). I find that mildly impressive.
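For reference, a comb sort sketch in Python (the shrink factor of 1.3 is the commonly quoted one):

    def comb_sort(a):
        # Bubble sort with a shrinking gap; once the gap reaches 1 it finishes
        # like an ordinary bubble sort.
        gap, swapped = len(a), True
        while gap > 1 or swapped:
            gap = max(1, int(gap / 1.3))
            swapped = False
            for i in range(len(a) - gap):
                if a[i] > a[i + gap]:
                    a[i], a[i + gap] = a[i + gap], a[i]
                    swapped = True
        return a

    print(comb_sort([5, 1, 4, 2, 8, 0, 2]))   # [0, 1, 2, 2, 4, 5, 8]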
The plethora of Algorithms for Fast Fourier Transforms are impressive too, the math that proves their validity is trippy and it was fun to try to prove a few on my own.
http://en.wikipedia.org/wiki/Fast_Fourier_transform
I can fairly easily understand the prime radix, multiple prime radix, and mixed radix algorithms but one that works on sets whose size are prime is quite cool.
2D ordered search analysis is quite interesting. You've got a 2-dimensional numeric array of numbers NxN where each row is sorted left-right and each column is sorted top-down. The task is to find a particular number in the array.
The recursive algorithm - pick the element in the middle, compare it with the target number, discard a quarter of the array (depending on the result of the comparison), and apply the procedure recursively to the remaining 3 quarters - is quite interesting to analyze.
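Here's a sketch of that recursion in Python, dropping one quadrant per comparison (half-open row/column ranges):

    def search(m, target):
        # Find target in a matrix whose rows and columns are sorted ascending;
        # returns (row, col) or None.
        def go(top, left, bottom, right):           # rows [top, bottom), cols [left, right)
            if top >= bottom or left >= right:
                return None
            mr, mc = (top + bottom) // 2, (left + right) // 2
            v = m[mr][mc]
            if v == target:
                return (mr, mc)
            if v < target:
                # rows [top, mr] x cols [left, mc] are all <= v < target: discard them
                return (go(top, mc + 1, mr + 1, right)
                        or go(mr + 1, left, bottom, right))
            # rows [mr, bottom) x cols [mc, right) are all >= v > target: discard them
            return (go(top, left, mr, right)
                    or go(mr, left, bottom, mc))
        return go(0, 0, len(m), len(m[0]) if m else 0)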
Non-deterministically polynomial complexity gets my vote, especially with the (admittedly considered unlikely) possibility that it may turn out to be the same as polynomial. In the same vein, anything that can theoretically benefit from quantum computing (N.B. this set is by no means all algorithms).
The other that would get my vote would be common mathematical operations on arbitrary-precision numbers -- this is where you have to consider things like the fact that multiplying big numbers is more expensive than multiplying small ones. There is quite a lot of analysis of this in Knuth (which shouldn't be news to anyone). Karatsuba's method is pretty neat: cut the two factors in half by digit (A1;A2)(B1;B2), multiply A1 B1, A2 B2 and (A1+A2)(B1+B2) - only three multiplications - and then combine the results. Recurse if desired...
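A sketch of Karatsuba in Python, splitting on bits rather than decimal digits (illustrative only; CPython's built-in * already uses Karatsuba for large integers):

    def karatsuba(x, y):
        # Multiply non-negative integers with 3 recursive multiplications instead of 4.
        if x < 10 or y < 10:
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2      # split point in bits
        hi_x, lo_x = x >> m, x & ((1 << m) - 1)
        hi_y, lo_y = y >> m, y & ((1 << m) - 1)
        a = karatsuba(hi_x, hi_y)                         # A1 * B1
        b = karatsuba(lo_x, lo_y)                         # A2 * B2
        c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b   # cross terms from one multiply
        return (a << (2 * m)) + (c << m) + b

    assert karatsuba(12345678901234567890, 98765432109876543210) == \
           12345678901234567890 * 98765432109876543210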
Shell sort. There are tons of variants with various increments, most of which have no benefits except to make the complexity analysis simpler.
