Relating NP-Complete problems to real world problems - algorithm

I have a decent grasp of NP Complete problems; that's not the issue. What I don't have is a good sense of where they turn up in "real" programming. Some (like knapsack and traveling salesman) are obvious, but others don't seem obviously connected to "real" problems.
I've had the experience several times of struggling with a difficult problem only to realize it is a well known NP Complete problem that has been researched extensively. If I had recognized the connection more quickly I could have saved quite a bit of time researching existing solutions to my specific problem.
Are there any resources (online or print) that specifically connect NP Complete to real world instances?
Edit:
For example, I was working on a program that tried to divide students into groups based on age, grade, and school of origin, which is essentially a graph partitioning problem. It took me a while to realize the connection.
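To make that mapping concrete, here is a rough sketch (with hypothetical students and arbitrary similarity weights, my own illustration) of how such a grouping task turns into an edge-weighted graph that a partitioning heuristic could work on.

```python
# Rough sketch: students are vertices, edge weights encode how "similar"
# two students are (same school, same grade, close age), and the goal is
# to partition the graph so that similar students end up together.
from itertools import combinations

students = [
    {"name": "A", "age": 12, "grade": 6, "school": "North"},
    {"name": "B", "age": 13, "grade": 7, "school": "North"},
    {"name": "C", "age": 12, "grade": 6, "school": "South"},
    {"name": "D", "age": 14, "grade": 8, "school": "South"},
]

def similarity(s, t):
    """Higher = better to place in the same group (illustrative weights)."""
    return (2 * (s["school"] == t["school"])
            + (s["grade"] == t["grade"])
            - abs(s["age"] - t["age"]))

# The edge-weighted graph you would hand to a partitioning heuristic
# (e.g. Kernighan-Lin) or an off-the-shelf partitioning library.
edges = {(s["name"], t["name"]): similarity(s, t)
         for s, t in combinations(students, 2)}
print(edges)
```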

I have found that Computers and Intractability: A Guide to the Theory of NP-Completeness, by Garey and Johnson, is the definitive reference on this topic.

Usually the connection you are talking about must be established with a so-called reduction: for example, you reduce 3-SAT to the problem you are working on, and then you can conclude that your problem is at least as hard as 3-SAT (i.e. NP-Hard).
This step is not trivial, since you have to prove that you can turn every instance l of a known NP-Hard problem L into an instance c of your problem C using a deterministic polynomial-time algorithm.
So, beyond memorizing the well-known reductions between common NP-Hard problems, there is no mechanical way to tell whether your problem is equivalent to another NP-Hard one: you first have to guess a candidate reduction and then prove it. You have to be smart.
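As a concrete example of what such a reduction looks like in code, here is a minimal sketch of the classic 3-SAT to Independent Set construction; the function name and data encoding are my own.

```python
# 3-SAT -> Independent Set: each clause becomes a triangle of vertices,
# and complementary literals across clauses are connected. The formula is
# satisfiable iff the graph has an independent set of size len(clauses).
# Literals are nonzero ints: +i for x_i, -i for NOT x_i.

def three_sat_to_independent_set(clauses):
    """clauses: list of 3-tuples of nonzero ints, e.g. [(1, -2, 3), ...].
    Returns (vertices, edges, k) for the equivalent Independent Set instance."""
    vertices = []      # one vertex per literal occurrence: (clause_idx, literal)
    edges = set()

    for ci, clause in enumerate(clauses):
        vs = [(ci, lit) for lit in clause]
        vertices.extend(vs)
        # Triangle inside each clause: at most one literal per clause is "chosen".
        for a in range(3):
            for b in range(a + 1, 3):
                edges.add((vs[a], vs[b]))

    # Conflict edges: a literal and its negation can never both be chosen.
    for i, (ci, li) in enumerate(vertices):
        for (cj, lj) in vertices[i + 1:]:
            if ci != cj and li == -lj:
                edges.add(((ci, li), (cj, lj)))

    return vertices, edges, len(clauses)


if __name__ == "__main__":
    # (x1 or x2 or x3) and (not x1 or not x2 or x3)
    v, e, k = three_sat_to_independent_set([(1, 2, 3), (-1, -2, 3)])
    print(len(v), "vertices,", len(e), "edges, target set size", k)
```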

Here is a wiki link:
http://wapedia.mobi/en/List_of_NP-complete_problems
Notice it says:
"This list is in no way comprehensive (there are more than 3000 known NP-complete problems)."
It would probably be a great task if anyone could compile such a list.
A theorist should try to understand/prove an NP-Complete/Hard problem. But a programmer doesn't have that kind of time; he needs a list.
Am I correct?
I think you should Google it, read through all the links, and add any new problem you find to your list.
Hope it helps.
PS: Don't forget to post the list when you're finished :P

For developing better intuition, the book "The Algorithm Design Manual, Second Edition" by Skiena (excerpts on Google Books) is simply great.
It has a catalog in the back of problems (including hard problems), each with an illustration and (often) a discussion with a real-world example.
It covers both the theoretical and practical side of things, often talking about actual code.
Read excerpts online here (see some examples in chapter 14):
http://books.google.dk/books?id=7XUSn0IKQEgC&printsec=frontcover#v=onepage&q&f=false
Chapter 16 (not online) discusses some hard problems, including graph partition.

Related

Understanding algorithm design techniques in depth

"Designing the right algorithm for a given application is a difficult job. It requires a major creative act, taking a problem and pulling a solution out of the ether. This is much more difficult than taking someone else's idea and modifying it or tweaking it to make it a little better. The space of choices you can make in algorithm design is enormous, enough to leave you plenty of freedom to hang yourself".
I have studied several basic algorithm design techniques like divide and conquer, dynamic programming, greedy, backtracking, etc.
But I always fail to recognize which principles to apply when I come across certain programming problems. I want to master the design of algorithms.
So can anyone suggest a good place to understand the principles of algorithm design in depth?
I suggest Programming Pearls, 2nd edition, by Jon Bentley. He talks a lot about algorithm design techniques and provides examples of real world problems, how they were solved, and how different algorithms affected the runtime.
Throughout the book, you learn algorithm design techniques, program verification methods to ensure your algorithms are correct, and a little bit about data structures. It's a very good book and I recommend it to anyone who wants to master algorithms. Go read the reviews on Amazon: http://www.amazon.com/Programming-Pearls-2nd-Edition-Bentley/dp/0201657880
You can have a look at some of the book's contents here: http://netlib.bell-labs.com/cm/cs/pearls/
Enjoy!
You can't learn algorithm design just from reading books. Certainly, books can help. Books like Programming Pearls as suggested in another answer are great because they give you problems to work. Each problem forces you to think about how to solve a particular type of problem.
The idea is that you expose yourself to many different types of problems and their solutions. In doing so, you learn how to examine a problem and see if it shares anything in common with problems you've already seen. In that regard, it's not a whole lot different than the way you learned how to solve "word problems" in math class. Granted, most algorithm problems are more complex than having to figure out where on the tracks the two trains will collide, but the way you learn how to solve the problems is the same. You learn common techniques used to solve simple problems, then combine those techniques to solve more complex problems, etc.
Read, practice, lather, rinse, repeat.
In addition to books like Programming Pearls, there are sites online that post different programming challenges that you can test yourself on. It helps if you have friends or co-workers who also are interested in algorithms, because you can bounce ideas off each other and pose interesting challenges, or work together to come up with solutions to problems.
Did I mention that it takes practice?
"Mastering" anything takes time. A long time. A popular theory is that it takes 10,000 hours of practice to become an expert at anything. There's some dispute about that for particular endeavors, but in general it's true. You don't master anything overnight. You have to study. And practice. And read what others have done. Study some more and practice some more.
A good book about algorithm design is Kleinberg and Tardos's Algorithm Design. Which design technique applies depends on the problem you are going to tackle. It is very important to do the exercises in the algorithm books and get feedback from teachers about them.
If there exists a locally optimal choice that leads to the globally optimal solution, you can use a greedy algorithm.
If the problem has optimal substructure (and overlapping subproblems), you can use dynamic programming.
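To make those two rules of thumb concrete, here is a minimal sketch (my own illustrative examples, not from the book): interval scheduling admits a greedy choice (always take the earliest finishing interval), while coin change with arbitrary denominations only has optimal substructure, so it needs dynamic programming.

```python
def max_nonoverlapping(intervals):
    """Greedy: repeatedly pick the interval that finishes earliest."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

def min_coins(amount, coins):
    """DP: best[a] = fewest coins summing to a, built from smaller amounts."""
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

print(max_nonoverlapping([(1, 4), (3, 5), (4, 7), (6, 8)]))  # [(1, 4), (4, 7)]
print(min_coins(6, [1, 3, 4]))  # 2 (3 + 3); greedy largest-first would use 3 coins
```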

Estimating difficulty of instances of NP problems

NP problems look like they are suitable for use as trapdoor functions or proofs of work, since they are difficult to solve but easy to verify. Unfortunately, they seem a little hard to use in adversarial settings where an opponent can control problem selection, because while the worst case is hard, particular instances can be solved very quickly.
So: is there any algorithm which can take instances and estimate - more efficiently than trying to solve them - how hard or close to worst-case they are?
(The context is musing about a Bitcoin protocol where the proofs-of-work were reusable and not useless hash checks. The obvious approach is to have a central authority issue, for each transaction block, an NP instance which corresponds to a real-world problem. But the central authority could be subverted and start issuing easy problems, which would render the network vulnerable to double-spends. One could accept problems from multiple authorities, or anyone, but the chosen-easy-problem issue remains. If there were some way to estimate the difficulty of any problem presented to the network, then 'too easy' problems could simply be ignored, falling back to the hash race if necessary.)
EDIT: jaxtr links me to "Predicting Satisfiability at the Phase Transition", which gives algorithms which estimate hardness at 70% accuracy - but they don't seem to investigate whether the algorithm can be deliberately fooled. (As well, one can apparently generate SAT problems with specified probabilities of being satisfiable.)
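For a rough feel of how instance difficulty varies with problem parameters, here is an illustrative sketch (my own, not taken from that paper) that generates random 3-SAT instances; formulas with a clause-to-variable ratio near the roughly 4.27 phase transition tend to be much harder for solvers than under- or over-constrained ones.

```python
import random

def random_3sat(num_vars, ratio=4.27, seed=0):
    """Generate a random 3-SAT instance as a list of 3-literal clauses
    (literals are nonzero ints: +i for x_i, -i for NOT x_i)."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(int(ratio * num_vars)):
        vs = rng.sample(range(1, num_vars + 1), 3)   # three distinct variables
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
    return clauses

hard_ish = random_3sat(100)          # near the phase transition: typically hard
easy = random_3sat(100, ratio=1.0)   # under-constrained: almost surely satisfiable
print(len(hard_ish), len(easy))
```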
This is the same problem faced by researchers trying to create public-key encryption algorithms based on NP-completeness. As far as I know, there have been some stabs at it, but it's still an open problem. See the discussion here: Are there public key cryptography algorithms that are provably NP-hard to defeat?
I know I've seen more recent work, but can't find it offhand. I recall a book composed of articles about alternative cryptosystems should factorization suddenly become cheap, and I'll try to dig up the link.
Edit: The comment below points to the book I was thinking of. The website has lots of good references to various relevant papers. See the "code based" section in particular.

how to get started with TopCoder to update/develop algorithm skills?

At my workplace, the work I do is hardly challenging, and doing it I think I might be losing the ability to look at a completely new problem and think about different ideas to solve it.
A friend suggested TopCoder.com to me, but looking at the overwhelming number of problems I cannot decide how to get started.
What I want is to sharpen my techniques (not a particular language or framework).
The only way to get started would be to pick problems. Division I is the more difficult division, so you will probably find that the division I medium and hard problems will be somewhat interesting and challenging (unless you are quite clever.)
If you check the event calendar, you can see what algorithm competition rounds are coming up in your time zone. The competitions have the added virtue of forcing you to read and analyze other people's code in the challenge phase, so even if you would just as soon practice without a clock, you may find them interesting.
TopCoder algorithm contests are a way to develop your coding speed. Solving any of the problems in the practice arena is difficult unless you already have knowledge of various algorithms.
The problems on Project Euler suffer from the same flaw. You already have to know the algorithms to solve the problems in a reasonable time frame.
What I would suggest is to pick a project that you're interested in, and pursue it as you have time. As an example, I'm currently learning how to work with the open street map tiles in an Eclipse rich client platform.
Try http://projecteuler.net. A problem's difficulty can be gauged by its number of solvers.
I prefer this site because it is language-agnostic and the problems are really challenging.
You need the experience of solving a couple of problems on any online judge (like http://www.spoj.com, http://www.lightoj.com, http://www.codeforces.com) in any programming language of your choice. That will give you an idea of how your programs are tested online.
Then you can follow this -> http://localboyfrommadurai.blogspot.in/2011/12/new-to-topcoder.html

Improve algorithmic thinking [closed]

I was thinking about ways to improve my ability to find algorithmic solutions to a problem. I have thought of solving math problems from various areas such as discrete mathematics or linear algebra. After googling a bit, I read an article claiming you need to learn game programming to achieve this, and it seems logical to me.
Do you have, or have you had, the same concerns as me, or do you have any ideas on this? I am looking forward to hearing them.
Thank you all in advance.
P.S.1: I want to say that I already know about programming and how to program (although I am at an amateur level :-) ) and I just want to improve at this specific issue, NOT start learning from scratch.
P.S.2: I think it's a useful topic for future reference, so I checked the community wiki box.
Solve problems on a daily basis. Watch traffic lights and ask yourself, "How can these be synced to optimize the flow of traffic? Or to optimize the flow of pedestrians? What is the best solution for both?". Look at elevators and ask yourself "Why should these elevators use different rules than the elevators in that other building I visited yesterday? How is it actually implemented? How can it be improved?".
Try to see a problem everywhere, even if it is solved already. Reflect on the solution. Ask yourself why your own superior solution probably isn't as good as the one you can see - what are you missing?
And so on. Every day. All of the time.
The idea is that almost everything can be viewed as an algorithm (a goal that has some kind of meaning to somebody, and a method with which to achieve it). Try to have that in mind next time you watch a gameshow on TV, or when you read the news coverage of the latest bank robbery. Ask yourself "What is the goal?", "Whose goal is it?" and "What is the method?".
It can easily be mistaken for critical thinking but is more about questioning your own solutions, rather than the solutions you try to understand and improve.
First of all, and most important: practice. Think of solutions to everything, all the time. It doesn't have to be on your computer, programming; any algorithm will do. For example: back when you traded cards, how did you compare your deck and your friend's to determine the best trades for both of you? How many trades could you make to maximize the exchange without ending up with any repeated cards?
Use problem databases and online judges like this site, http://uva.onlinejudge.org/index.php, that has hundreds of problems concerning general algorithms. And you don't need to be an expert programmer at all to solve any of them. What you need is a good ability with logic and math. There, you can find problems from the simplest ones to the most challenging. Most of them come from Programming Marathons.
You can, then, implement them in C, C++, Java or Pascal and submit them to the online judge. If you have a good algorithm, it will be accepted. Else, the judge will say your algorithm gave the wrong answer to the problem, or it took too long to solve.
Reading about algorithms helps, but don't waste too much time on it... Reading won't help as much as trying to solve the problems by yourself. Maybe you can read the problem, try to figure out a solution for yourself, compare with the solution proposed by the source and see what you missed. Don't try to memorize them. If you have the concept well learned, you can implement it anywhere. Understanding is the hardest part for most of them.
Polya's "How To Solve It" is a great book for thinking about how to solve mathematical problems and do proofs, and I'd recommend it for anyone who does problem solving.
But! It doesn't really address the excitement that happens when the real world provides input to your system, via channel noise, user wackiness, other programs grabbing resources, etc. For that it is worth looking at algorithms that get applied to real-world input (obligatory and deserved nod to Knuth's collection), and systems which are fairly robust in the face of same (TCP, kernel internals). Part of coming up with good algorithmic solutions is to know what already exists.
And alongside reading all that, of course practice practice practice.
You should check out Mathematics and Plausible Reasoning by G. Polya. It is a rare math book, which actually deals with the thought process involved in making mathematical discoveries. I think it is the same thought process that is involved in coming up with algorithms.
The saying "practice makes perfect" definitely applies. I'm tutoring a friend of mine in programming, and I remind him that "if you don't know how to ride a bike, you could read every book about it but it doesn't mean you'll be better than Lance Armstrong tomorrow - you have to practice".
In your case, how about trying the problems in Project Euler? http://projecteuler.net
There are a ton of problems there, and for each one you could practice at developing an algorithm. Once you get a good-enough implementation, you can access other people's solutions (for a particular problem) and see how others have done it. Don't think of it as math problems, but rather as problems in creating algorithms for solving math problems.
In university, I actually took a class in algorithm design and analysis, and there is definitely a lot of theory behind it. You may hear people talking about "big-O" complexity and stuff like that - there are quite a lot of different properties about algorithms themselves which can lead to greater understanding of what constitutes a "good" algorithm. You can study quite a bit in this regard as well for the long-term.
Check out some online judges and TopCoder (algorithm tutorials). Take an algorithms book (CLRS, Skiena) and do the harder exercises. Practice a lot.
I would suggest this path for you:
1. First learn the elementary parts of a language.
2. Then learn some basic maths.
3. Move to TopCoder Division 2 easy problems. If you cannot score 250 points on any given day, it means you need a lot of practice; keep practising.
4. Now it's time to learn some of a programmer's tools: take a good book like the Algorithm Design Manual by Steven Skiena and learn about dynamic programming and the greedy approach.
5. Now move on to marathons; don't be discouraged if you cannot solve problems quickly. Improvement will not happen overnight; you will have to patiently keep working hard.
6. Continue step 5 from now on and you will become a better programmer.
Learning about game programming will probably lead you to good algorithms for game programming, but not necessarily to better algorithms in general.
It's a good start, but I think that the best way to learn and apply algorithmic knowledge is:
1. Learn about good algorithms that currently exist for your area of interest.
2. Expand your knowledge by looking at other areas; for example, what kinds of algorithms are required when working on genetic analysis? What's the best approach for determining run-off potential as it relates to flooding?
3. Read about problems in other domains and attempt to use the algorithms that you're familiar with to see if they fit. If they don't, try to break the problem down and see if you can come up with your own algorithm.
A few more books worth reading (in no particular order):
Aha! Insight (Martin Gardner)
Any of the Programming Pearls books (Jon Bentley)
Concrete Mathematics (Graham, Knuth, and Patashnik)
A Mathematical Theory of Communication (Claude Shannon)
Of course, most of those are just samples -- other books and papers by the same authors are usually quite good as well (e.g. Shannon wrote a lot that's well worth reading, and far too few people give it the attention it deserves).
Read SICP / Structure and Interpretation of Computer Programs and work all the problems; then read The Art of Computer Programming (all volumes), working all the exercises as you go; then work through all the problems at Project Euler.
If you aren't damned good at algorithms after that, there is probably no hope for you. LOL!
P.S. SICP is available freely online, but you have to buy AoCP (get the international, not-for-release-in-north-america edition used for 30 USD). And I haven't done this yet myself (I'm trying when I have free time).
I can recommend the book "Introductory Logic and Sets for Computer Scientists" by Nimal Nissanke (Addison Wesley). The focus is on set theory, predicate logic etc. Basically the maths of solving problems in code if you will. Good stuff and not too difficult to work through.
Good luck...Kevin
Ok, so to sum up the suggestions:
The most effective way to improve this ability is to solve problems as frequently as possible, either real-world problems (such as the elevator "algorithm" already suggested) or exercises from books like CLRS (great, I already own it :-)). But I didn't see comments about maths and I don't know what to conclude (whether you agree or not). :-s
The links were great; I will definitely use them. I also think it would be a good exercise to solve problems from national/international informatics contests, or to study the way a mathematician proves a theorem.
Thank you all again. Feel free to suggest more, although I am already satisfied with the solutions mentioned.

How to cultivate algorithm intuition?

When faced with a problem in software I usually see a solution right away. Of course, what I see is usually somewhat off, and I always need to sit down and design (admittedly, I usually don't design enough), but I get a certain intuition right away.
My problem is I don't get that same intuition when it comes to advanced algorithms. I feel much more up to the task of building another Facebook than building another Google search, or a Music Genome Project. It's probably because I've been building software for quite some time, but I have little experience composing algorithms.
I would like the community's advice on what to read and what projects to undertake to be better at composing algorithms.
(This question has nothing to do with Algorithmic composition. Well, almost nothing)
+1 To whoever said experience is the best teacher.
There are several online portals which have a lot of programming problems, that you can submit your own solutions to, and get an automated pass/fail indication.
http://www.spoj.pl/
http://uva.onlinejudge.org/
http://www.topcoder.com/tc
http://code.google.com/codejam/contests.html
http://projecteuler.net/
https://codeforces.com
https://leetcode.com
The USACO training site is the training program that all USA computing olympiad participants go through. It goes step by step, introducing more and more complex algorithms as you go.
You might find it helpful to perform algorithms physically. For example, when you're studying sorting algorithms, practice doing each one with a deck of cards. That will activate different parts of your brain than reading or programming alone will.
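For example, insertion sort is essentially what most people already do with a hand of cards; here is a minimal sketch (my own illustration of that physical exercise).

```python
# Insertion sort, the way you would sort a hand of cards: pick up cards
# one at a time and slide each into place among the cards already held.

def insertion_sort(hand):
    for i in range(1, len(hand)):
        card = hand[i]
        j = i - 1
        while j >= 0 and hand[j] > card:   # shift bigger cards to the right
            hand[j + 1] = hand[j]
            j -= 1
        hand[j + 1] = card                 # drop the card into its slot
    return hand

print(insertion_sort([7, 2, 9, 4, 1]))     # [1, 2, 4, 7, 9]
```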
Steve Yegge referred to "The Algorithm Design Manual" in one of his rants. I haven't seen it myself, but it sounds like it's just the ticket from his description.
My absolute favorite for this kind of interview preparation is Steven Skiena's The Algorithm Design Manual. More than any other book it helped me understand just how astonishingly commonplace (and important) graph problems are – they should be part of every working programmer's toolkit. The book also covers basic data structures and sorting algorithms, which is a nice bonus. But the gold mine is the second half of the book, which is a sort of encyclopedia of 1-pagers on zillions of useful problems and various ways to solve them, without too much detail. Almost every 1-pager has a simple picture, making it easy to remember. This is a great way to learn how to identify hundreds of problem types.
problem domain
First you must understand the problem domain. An elegant solution to the wrong problem is no good, nor, in most cases, is an inefficient solution to the right problem. Solution quality, in other words, is often relative. A simple scheduling problem that has a deterministic solution taking ten minutes to run may be fine if schedules are recalculated once per week, but if schedules change several times a day then a genetic algorithm solution that converges in a few seconds may be required.
decomposition and mapping
Second, decompose the problem into sub-problems and known/unknown elements that correspond to elements of the solution. Sometimes this is obvious, e.g. to count widgets you need a way of identifying widgets, an incrementable counter, and a way of storing the count. Sometimes it is not so obvious. Sometimes you have to decompose the problem, the domain, and possible solutions at the same time and try several different mappings between them to find one that leads to the correct results [this is the general method].
model
Model the solution, in your head at least, and walk through it to see if it works correctly. Adjust as necessary (See decomposition and mapping, above).
composition/interfaces
Many times you can find elements of the problem and elements of the solution that map to each other and produce partial results that are useful. This composition and interface construction provides the kernel of the solution, and also serves to reduce the scope of the remaining problem. So then you just loop back to the top with a smaller initial problem, and go through it again.
experience
Experience is the best teacher, of course, but reading about different kinds of problems and solutions will also be helpful. Studying some of the well-known algorithms and their applications is likewise very helpful, e.g. Dijkstra, Bresenham, Unification, and of course, graph theory.
I am not sure intuition can be cultivated, but I think I know what you are asking. The more problems you solve, the more information and experience you have at your disposal for future problems. So, I say just practice. Practice programming real world applications and you run into plenty of problems. Sometimes, solving puzzles can be very educational as well.
I try to find physical analogues when I'm looking at a complex problem.

Resources