Which algorithm is most efficient for computing eigenvectors?

I need to find all eigenpairs of a symmetric real matrix. A quick search shows that there are 5 main algorithms for doing that:
1. Jacobi rotations. Every researcher says it's inefficient.
2. QR. My textbook says it's the best way.
3. Divide-and-conquer. Wikipedia says it's better than QR.
4. MRRR. I had never heard of it, and every paper on it seems incomprehensible to me. However, it's newer than divide-and-conquer, so it must be faster.
5. Homotopy method. I don't know anything about it.
Algorithms 2, 3 and 4 require the matrix to first be reduced to tridiagonal form (the symmetric case of Hessenberg form). My question is: which of these is actually the best? I failed to find a paper with the desired comparison, and some papers are behind a paywall. Beyond that, I tried to read the LAPACK code, but that seems impossible, as it is optimised as hell. I need your help!
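One practical way to compare them without deciphering the LAPACK source: SciPy's scipy.linalg.eigh exposes the relevant LAPACK drivers directly, so you can just time them on matrices like yours. A minimal benchmark sketch (matrix size and seed are arbitrary illustrative choices; the driver names are SciPy's documented ones, with 'evr' the documented default):

```python
import time
import numpy as np
from scipy.linalg import eigh

# driver='ev'  -> ?syev  (tridiagonalisation + QR iteration)
# driver='evd' -> ?syevd (tridiagonalisation + divide-and-conquer)
# driver='evr' -> ?syevr (tridiagonalisation + MRRR)
rng = np.random.default_rng(0)
n = 1000                          # illustrative size, not a recommendation
a = rng.standard_normal((n, n))
a = (a + a.T) / 2                 # symmetrise

for driver in ("ev", "evd", "evr"):
    t0 = time.perf_counter()
    w, v = eigh(a, driver=driver)  # all eigenvalues w and eigenvectors v
    print(f"{driver}: {time.perf_counter() - t0:.2f} s")
```

Which driver wins depends on the matrix size, its eigenvalue distribution, and the BLAS you link against, so measuring on your own problem beats any general claim.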

Related

Proof and explanation of the base-2 integer square root algorithm

While looking for a reasonably fast algorithm to calculate a square root of a number up to n digits I have stumbled upon this algorithm:
https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Binary_numeral_system_(base_2)
I must admit, this is a beautiful piece of code, but the explanation provided on Wikipedia doesn't really speak to me. I have tried to understand it for several hours now and I simply have no idea how it works. I have done some example calculations on paper, but it didn't seem to help.
So that's why I ask this question here; an explanation would be really useful.
Also, if this method is supposed to be much quicker, why isn't it the one used in the standard C library?
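Not a full proof, but here is the Wikipedia C routine transcribed to Python with comments on what each pass of the loop decides; the bit_length trick for choosing the starting power of four is my own addition. (As for the standard C library: its sqrt works on floating-point numbers and typically compiles down to a hardware instruction, so a bit-by-bit integer loop has no place there.)

```python
def isqrt(n):
    """Digit-by-digit (base-2) integer square root: floor(sqrt(n)).

    d walks down through powers of four; each pass decides one bit
    of the result, accumulating it (suitably shifted) in c, while x
    holds the remainder still to be accounted for.
    """
    if n < 0:
        raise ValueError("isqrt of a negative number")
    if n == 0:
        return 0
    x, c = n, 0
    d = 1 << ((n.bit_length() - 1) & ~1)  # largest power of 4 <= n
    while d:
        if x >= c + d:        # the candidate bit fits: accept it
            x -= c + d
            c = (c >> 1) + d
        else:                  # it doesn't fit: shift past this bit
            c >>= 1
        d >>= 2
    return c

# sanity check against the float route on small inputs
assert all(isqrt(k) == int(k ** 0.5) for k in range(10_000))
```

Working one small input (say n = 25) through by hand while watching x, c and d is, in my experience, the quickest way to see the invariant.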

When would it be important to be able to calculate order of growth?

I'm reading chapters 2 and 3 of CLRS and get stuck so often, especially on the problems provided at the end of each chapter, that I wonder if it will ever be worth this much effort. I can't even understand the solutions online, like this one: http://clrs.skanev.com/02/problems/01.html
I heard that this book is one of the most popular textbooks for university CS classes, but do people skip the intricate parts and just memorize the important things, like that insertion sort has this order of growth and merge sort has that order of growth, and move on?
Isn't it enough to be familiar with many useful algorithms to have about as much understanding of computer science as people with a degree in CS generally do?
Understanding isn't about memorization. It's about being able to apply the knowledge to solve problems. The textbook problems are quite simple compared to most real-life problems. So, skipping these simply means you're not learning at all, and you certainly won't be able to apply any of it in real life. You're memorizing, but you can't use what you've memorized.
TL;DR: The proof of being able to use the knowledge is the ability to solve problems, and textbook problems are simple.‡ One doesn't go without the other.
‡ Knuth's texts are a notable exception: he also offers some borderline intractable problems, and everything in between :)
The point is that "people with a degree in CS ... in general" can work out the order of growth of an algorithm. That's why people go to the effort of learning this stuff. If you just want to be able to say "mergesort is O(n log n)", then indeed, all you need is to see and memorise that fact. If you want to be able to work out the O() of an algorithm, even when it's one you've never seen before - then you need these methods.
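For a concrete taste of "working it out": take a loop nest you have never seen before and count how many times the inner body executes (the function and its task are invented for illustration):

```python
def count_equal_pairs(xs):
    n = len(xs)
    total = 0
    for i in range(n):              # outer body runs n times
        for j in range(i + 1, n):   # inner body runs n - 1 - i times
            total += xs[i] == xs[j]
    return total

# Total inner iterations: (n-1) + (n-2) + ... + 1 + 0 = n(n-1)/2,
# so the running time grows as Theta(n^2) -- no memorised fact needed.
```

That little summation is exactly the kind of step CLRS chapters 2 and 3 are training; once it is automatic, unfamiliar algorithms stop being scary.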

Recommendations for Fast Multipole Method implementation?

I'm interested in implementing the Fast Multipole Method to efficiently simulate a system of repulsive particles.
I've found a large collection of references discussing FMM, but none seem very approachable for non-mathematicians who want to fully understand the algorithm.
Can you recommend a ground-up reference that clearly explains the mathematics behind the process, and includes pseudocode exemplifying a proper implementation?
I am by no means an expert in FMM, but this Java implementation and introduction is the best source I've found so far for explaining it carefully and slowly. The paper is good at defining terms before using them, and the code is at least useful as a reference point. The math still gets hairy very quickly, but it is what it is :)
A pedestrian introduction to fast multipole methods is a close second. It doesn't explain the actual details of a working FMM implementation, but it's a good introduction to the basic ideas.
I like the short course on FMM. It begins with FMM in 1D, then uses complex analysis to do FMM in 2D. And then there is the crazy 3D version, which uses the theory of spherical harmonics and which I guess can be very difficult for a non-mathematician. But if you only need FMM in 2D you should be fine.
Unfortunately, no pseudocode is given there.
But do you really need the accuracy of FMM? You might be fine with the Barnes-Hut algorithm, sketched below.
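To make that suggestion concrete, here is a minimal 2D Barnes-Hut sketch for a repulsive 1/r^2 interaction. The Cell class and every name in it are my own invention, not from any library; it keeps only the monopole (total mass / centre of mass) term and ignores edge cases such as two coincident particles:

```python
import numpy as np

class Cell:
    """One square cell of the quadtree."""

    def __init__(self, center, size):
        self.center = np.asarray(center, float)  # cell midpoint
        self.size = size                          # side length
        self.mass = 0.0                           # total mass inside
        self.com = np.zeros(2)                    # centre of mass inside
        self.children = None                      # 4 sub-cells once split
        self.body = None                          # (pos, m) if a leaf holds one particle

    def _quadrant(self, pos):
        return (pos[0] > self.center[0]) + 2 * (pos[1] > self.center[1])

    def _split(self):
        h = self.size / 4
        offsets = [(-h, -h), (h, -h), (-h, h), (h, h)]
        self.children = [Cell(self.center + o, self.size / 2) for o in offsets]

    def insert(self, pos, m):
        pos = np.asarray(pos, float)
        # keep the running monopole data up to date on the way down
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m
        if self.children is None and self.body is None:
            self.body = (pos, m)          # empty leaf: just keep the particle
            return
        if self.children is None:          # occupied leaf: split and push down
            old_pos, old_m = self.body
            self.body = None
            self._split()
            self.children[self._quadrant(old_pos)].insert(old_pos, old_m)
        self.children[self._quadrant(pos)].insert(pos, m)

    def force(self, pos, theta=0.5, eps=1e-12):
        """Repulsive 1/r^2 force on a unit-charge probe at pos."""
        if self.mass == 0.0:
            return np.zeros(2)
        d = pos - self.com
        r = np.hypot(*d) + eps
        # far away (or a leaf): treat the whole cell as one point charge
        if self.children is None or self.size / r < theta:
            return self.mass * d / r**3
        return sum(child.force(pos, theta, eps) for child in self.children)

# usage: build the tree once, then query each particle's force
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 2))
root = Cell(center=(0, 0), size=2.0)
for p in pts:
    root.insert(p, 1.0)
forces = np.array([root.force(p) for p in pts])
```

The opening angle theta trades accuracy for speed; FMM buys its extra accuracy by carrying higher multipole moments instead of just the centre of mass.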
After running into a similar issue to you, I ended up writing a fully-documented Python fast multipole method implementation, pybbfmm. I've also written a short, mathematics-free tutorial on how the method works. Together, I think they're substantially more accessible than any of the other presentations I could find.
(meta: Although this is effectively a link post, the OP is explicitly asking for a link. I've added what I think was missing from the last one - the name of the library - but I'm not sure how else to offer this answer except as a name and a link. Certainly it doesn't feel any more link-post-y than the accepted answer. If this one gets deleted as well, I'll give up.)

Relating NP-Complete problems to real world problems

I have a decent grasp of NP Complete problems; that's not the issue. What I don't have is a good sense of where they turn up in "real" programming. Some (like knapsack and traveling salesman) are obvious, but others don't seem obviously connected to "real" problems.
I've had the experience several times of struggling with a difficult problem only to realize it is a well known NP Complete problem that has been researched extensively. If I had recognized the connection more quickly I could have saved quite a bit of time researching existing solutions to my specific problem.
Are there any resources (online or print) that specifically connect NP Complete to real world instances?
Edit:
For example, I was working on a program that tried to divide students into groups based on age, grade, and school of origin, which is essentially a graph partitioning problem. It took me a while to realize the connection.
I have found that Computers and Intractability is the definitive reference on this topic.
Usually the connection you are talking about must be established with a so-called reduction: for example, you reduce 3-SAT to the problem you are working with, and then you can conclude that your problem is at least as hard as 3-SAT.
This step is not trivial, since you have to prove that you can turn every instance l of a known NP-hard problem L into an instance c of your problem C using a deterministic polynomial-time algorithm, in such a way that the answer to c is yes exactly when the answer to l is yes.
So, apart from memorising the basic correspondences between common NP-hard problems, there is no way to be sure a problem is as hard as a known NP-hard one without first guessing a reduction and then proving it; you have to be smart.
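To make the mechanics of such a mapping concrete, here is a sketch of the classic textbook reduction from 3-SAT to Independent Set (so Independent Set plays the role of the problem C being proved hard; all names in the code are my own):

```python
from itertools import combinations

def three_sat_to_independent_set(clauses):
    """Classic reduction: 3-SAT instance -> (graph, k) for Independent Set.

    clauses: list of 3-tuples of nonzero ints, DIMACS-style (3 means x3,
    -3 means NOT x3). The formula is satisfiable iff the returned graph
    has an independent set of size k = len(clauses).
    """
    # one vertex per literal occurrence, tagged with its clause index
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for (i, a), (j, b) in combinations(vertices, 2):
        if i == j:        # triangle inside each clause: pick at most one literal
            edges.add(((i, a), (j, b)))
        elif a == -b:     # never pick both x and NOT x
            edges.add(((i, a), (j, b)))
    return vertices, edges, len(clauses)

# (x1 or x2 or not x3) and (not x1 or x2 or x3)
V, E, k = three_sat_to_independent_set([(1, 2, -3), (-1, 2, 3)])
# The mapping is clearly polynomial: O(m^2) pairs for m literal occurrences.
```

Picking one satisfied literal per clause gives an independent set of size k, and vice versa, which is exactly the "yes iff yes" condition a reduction has to satisfy.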
Here is a wiki link:
http://wapedia.mobi/en/List_of_NP-complete_problems
Notice it says:
"This list is in no way comprehensive (there are more than 3000 known NP-complete problems)"
It would probably be a great undertaking if anyone could compile such a list.
A theorist should try to understand/prove that a problem is NP-complete/hard, but a programmer doesn't have time for that; he needs a list. Am I correct?
I think you should google it, read through all the links, and add any new problem you find to your list.
Hope it helps.
PS: Don't forget to post the list when you're finished :P
For developing better intuition, the book "The Algorithm Design Manual, Second Edition" by Skiena (excerpts on Google Books) is simply great:
- A catalogue in the back of problems (including hard problems), each with an illustration and (often) a discussion with a real-world example.
- It covers both the theoretical and practical side of things, often talking about actual code.
Read excerpts online here (see some examples in chapter 14):
http://books.google.dk/books?id=7XUSn0IKQEgC&printsec=frontcover#v=onepage&q&f=false
Chapter 16 (not online) discusses some hard problems, including graph partitioning.

How to cultivate algorithm intuition?

When faced with a problem in software I usually see a solution right away. Of course, what I see is usually somewhat off, and I always need to sit down and design (admittedly, I usually don't design enough), but I get a certain intuition right away.
My problem is I don't get that same intuition when it comes to advanced algorithms. I feel much more up to the task of building another Facebook than building another Google Search or a Music Genome Project. It's probably because I've been building software for quite some time, but I have little experience with composing algorithms.
I would like the community's advice on what to read and what projects to undertake to be better at composing algorithms.
(This question has nothing to do with Algorithmic composition. Well, almost nothing)
+1 to whoever said experience is the best teacher.
There are several online portals with lots of programming problems that you can submit your own solutions to and get an automated pass/fail indication.
http://www.spoj.pl/
http://uva.onlinejudge.org/
http://www.topcoder.com/tc
http://code.google.com/codejam/contests.html
http://projecteuler.net/
https://codeforces.com
https://leetcode.com
The USACO training site is the training program that all USA computing olympiad participants go through. It goes step by step, introducing more and more complex algorithms as you go.
You might find it helpful to perform algorithms physically. For example, when you're studying sorting algorithms, practice doing each one with a deck of cards. That will activate different parts of your brain than reading or programming alone will.
Steve Yegge referred to "The Algorithm Design Manual" in one of his rants. I haven't seen it myself, but it sounds like it's just the ticket from his description.
My absolute favorite for this kind of interview preparation is Steven Skiena's The Algorithm Design Manual. More than any other book it helped me understand just how astonishingly commonplace (and important) graph problems are – they should be part of every working programmer's toolkit. The book also covers basic data structures and sorting algorithms, which is a nice bonus. But the gold mine is the second half of the book, which is a sort of encyclopedia of 1-pagers on zillions of useful problems and various ways to solve them, without too much detail. Almost every 1-pager has a simple picture, making it easy to remember. This is a great way to learn how to identify hundreds of problem types.
problem domain
First you must understand the problem domain. An elegant solution to the wrong problem is no good, nor, in most cases, is an inefficient solution to the right problem. Solution quality, in other words, is often relative. A simple scheduling problem that has a deterministic solution taking ten minutes to run may be fine if schedules are recalculated once per week, but if schedules change several times a day then a genetic-algorithm solution that converges in a few seconds may be required.
decomposition and mapping
Second, decompose the problem into sub-problems and known/unknown elements that correspond to elements of the solution. Sometimes this is obvious, e.g. to count widgets you need a way of identifying widgets, an incrementable counter, and a way of storing the count. Sometimes it is not so obvious. Sometimes you have to decompose the problem, the domain, and possible solutions at the same time and try several different mappings between them to find one that leads to the correct results [this is the general method].
model
Model the solution, in your head at least, and walk through it to see if it works correctly. Adjust as necessary (See decomposition and mapping, above).
composition/interfaces
Many times you can find elements of the problem and elements of the solution that map to each other and produce partial results that are useful. This composition and interface construction provides the kernel of the solution and also serves to reduce the scope of the problem remaining. So then you just loop back to the top with a smaller initial problem and go through it again.
experience
Experience is the best teacher, of course, but reading about different kinds of problems and solutions will also be helpful. Studying some of the well-known algorithms and their applications is likewise very helpful, e.g. Dijkstra, Bresenham, Unification, and of course, graph theory.
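For instance, Dijkstra's algorithm is small enough to internalise completely. A minimal sketch (the dict-of-adjacency-lists representation is an arbitrary choice of mine):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest-path distances.

    graph: dict mapping node -> iterable of (neighbor, weight), weights >= 0.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry, already improved
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# tiny usage example
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
assert dijkstra(g, "a") == {"a": 0, "b": 1, "c": 3}
```

Once you have written something like this from memory a couple of times, recognising "this is really a shortest-path problem" in the wild becomes much easier.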
I am not sure intuition can be cultivated, but I think I know what you are asking. The more problems you solve, the more information and experience you have at your disposal for future problems. So, I say just practice. Practice programming real world applications and you run into plenty of problems. Sometimes, solving puzzles can be very educational as well.
I try to find physical analogues when I'm looking at a complex problem.
