See: http://kks.cabal.fi/GoodEnoughSearch
I have gone through quite a few papers and sites. I have not found anywhere that this algorithm has been presented before, or that someone has made something similar but better or more general. The algorithm is pretty simple, and thus should be found quite easily by anyone facing the same kind of problems I have faced.
It reminds me of the Monte Carlo method.
http://en.wikipedia.org/wiki/Monte_Carlo_method
Related
I have been having trouble finding the best solutions to data structures and algorithms questions posed by interviewers. I was wondering how you guys approach these problems. Is it a matter of just practicing solving all kinds of problems to get the experience, or do you have systematic ways to recognize certain types of problems? Can you recommend books that could help me? I've reread a lot of Introduction to Algorithms by CLRS, and I'm sure I could use a refresher on fundamental CS concepts.
I have developed some common sense in recognizing types of problems. E.g. if I am able to recognize that solutions to later iterations of a problem depend on past solutions, and ultimately stem from known base solutions, I know this is a dynamic programming problem. Maybe I need to study more to further develop this common sense.
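To make that concrete, here is a quick sketch (my own toy example, not from any interview) of a problem with exactly that shape: the classic coin-change problem, where the answer for an amount builds on answers for smaller amounts and bottoms out at the known base case of amount 0.

    # Minimum number of coins needed to make `amount`, or None if impossible.
    # dp[a] depends only on previously computed dp[a - coin] values,
    # and everything bottoms out at the base case dp[0] = 0.
    def min_coins(coins, amount):
        INF = float("inf")
        dp = [0] + [INF] * amount          # dp[0] = 0 is the known base solution
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and dp[a - c] + 1 < dp[a]:
                    dp[a] = dp[a - c] + 1  # later solution built from earlier ones
        return dp[amount] if dp[amount] != INF else None

    print(min_coins([1, 2, 5], 11))  # 3 coins: 5 + 5 + 1

Once I notice that "depends on past solutions" structure, the table-filling code almost writes itself; the hard part is spotting the structure in the first place.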
Thanks for reading.
I'm not sure SO is the best place for this question, but I recommend "Cracking the Coding Interview" by Gayle Laakmann McDowell.
Classical books about algorithms are fine, but they focus on more fundamental and "academic" material. CCI is focused specifically on solving interview questions.
"Designing the right algorithm for a given application is a difficult job. It requires a major creative act, taking a problem and pulling a solution out of the ether. This is much more difficult than taking someone else's idea and modifying it or tweaking it to make it a little better. The space of choices you can make in algorithm design is enormous, enough to leave you plenty of freedom to hang yourself".
I have studied several basic algorithm design techniques like divide and conquer, dynamic programming, greedy algorithms, backtracking, etc.
But I always fail to recognize which principles to apply when I come across certain programming problems. I want to master algorithm design.
So can anyone suggest the best place to understand the principles of algorithm design in depth?
I suggest Programming Pearls, 2nd edition, by Jon Bentley. He talks a lot about algorithm design techniques and provides examples of real world problems, how they were solved, and how different algorithms affected the runtime.
Throughout the book, you learn algorithm design techniques, program verification methods to ensure your algorithms are correct, and you also learn a little bit about data structures. It's a very good book and I recommend it to anyone who wants to master algorithms. Go read the reviews on Amazon: http://www.amazon.com/Programming-Pearls-2nd-Edition-Bentley/dp/0201657880
You can have a look at some of the book's contents here: http://netlib.bell-labs.com/cm/cs/pearls/
Enjoy!
You can't learn algorithm design just from reading books. Certainly, books can help. Books like Programming Pearls as suggested in another answer are great because they give you problems to work. Each problem forces you to think about how to solve a particular type of problem.
The idea is that you expose yourself to many different types of problems and their solutions. In doing so, you learn how to examine a problem and see if it shares anything in common with problems you've already seen. In that regard, it's not a whole lot different than the way you learned how to solve "word problems" in math class. Granted, most algorithm problems are more complex than having to figure out where on the tracks the two trains will collide, but the way you learn how to solve the problems is the same. You learn common techniques used to solve simple problems, then combine those techniques to solve more complex problems, etc.
Read, practice, lather, rinse, repeat.
In addition to books like Programming Pearls, there are sites online that post different programming challenges that you can test yourself on. It helps if you have friends or co-workers who also are interested in algorithms, because you can bounce ideas off each other and pose interesting challenges, or work together to come up with solutions to problems.
Did I mention that it takes practice?
"Mastering" anything takes time. A long time. A popular theory is that it takes 10,000 hours of practice to become an expert at anything. There's some dispute about that for particular endeavors, but in general it's true. You don't master anything overnight. You have to study. And practice. And read what others have done. Study some more and practice some more.
A good book about algorithm design is Kleinberg & Tardos. Every design technique depends on the problem that you are going to tackle. It is very important to do the exercises in the algorithm books and to get feedback from teachers on them.
If there exists a locally optimal choice that leads to the globally optimal solution, you can use a greedy algorithm (see the sketch below).
If the problem has optimal substructure and overlapping subproblems, you can use dynamic programming.
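As a concrete illustration of the greedy case (a standard textbook example, interval scheduling, not something taken from the book): always taking the compatible interval that finishes earliest is a locally optimal choice that provably yields a globally optimal, maximum-size selection.

    # Interval scheduling: pick the maximum number of non-overlapping intervals.
    # The greedy choice "take the interval that finishes earliest" is locally
    # optimal and provably leads to a globally optimal solution.
    def max_non_overlapping(intervals):
        chosen = []
        last_end = float("-inf")
        for start, end in sorted(intervals, key=lambda iv: iv[1]):  # sort by finish time
            if start >= last_end:   # compatible with everything chosen so far
                chosen.append((start, end))
                last_end = end
        return chosen

    print(max_non_overlapping([(1, 3), (2, 5), (4, 7), (6, 8)]))  # [(1, 3), (4, 7)]

Note that the same code with a different sort key (say, shortest interval first) is no longer optimal, which is exactly why proving the greedy-choice property for your particular problem matters.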
Lately I have been stuck on improving my algorithmic skills, and at this point I am finding myself out of good material for solving grid problems based on DFS and BFS. I somehow managed to do http://www.spoj.pl/problems/POUR1/ with brute-force logic, but I recently googled around and found out that the problem can be solved with BFS. However, I can't figure out exactly how to go about it. Can someone please provide some text to read, or some kind of explanation of the above-mentioned problem, so I can add this to my skill set? It would be extremely kind if you could also help me out with these techniques in problems like http://www.codechef.com/problems/MMANT/. Please help as soon as possible; I am really stuck on these kinds of problems and can't move on. It would also be really kind if you could provide a list of good questions about Binary Indexed Trees and segment trees, and some more examples of their usage.
Thanks for the help!! :)
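In case a concrete picture of the BFS idea helps: the usual approach to POUR1-style problems treats each pair (amount in vessel 1, amount in vessel 2) as a node and the six allowed operations (fill either vessel, empty either vessel, pour either way) as edges, then runs an ordinary shortest-path BFS from (0, 0). Here is a minimal sketch along those lines (the names and structure are mine, not an official solution, and for very large capacities you would want something cleverer than plain BFS):

    from collections import deque

    # BFS over states (x, y) = current contents of the two vessels.
    # Each of the six operations is an edge, so BFS finds the minimum number of steps.
    def min_pour_steps(a, b, target):
        start = (0, 0)
        dist = {start: 0}
        queue = deque([start])
        while queue:
            x, y = queue.popleft()
            if x == target or y == target:
                return dist[(x, y)]
            moves = [
                (a, y), (x, b),                          # fill vessel 1 / vessel 2
                (0, y), (x, 0),                          # empty vessel 1 / vessel 2
                (x - min(x, b - y), y + min(x, b - y)),  # pour 1 -> 2
                (x + min(y, a - x), y - min(y, a - x)),  # pour 2 -> 1
            ]
            for state in moves:
                if state not in dist:
                    dist[state] = dist[(x, y)] + 1
                    queue.append(state)
        return -1  # the target amount is unreachable

    print(min_pour_steps(3, 5, 4))  # 6 steps for the classic 3/5-litre puzzle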
One resource I've found useful is The Algorithmist:
The Algorithmist is a resource dedicated to anything algorithms - from the practical realm to the theoretical realm. There are also links and explanations to problem sets.
Also The Algorithm Design Manual by Steve Skiena is extremely useful, especially the second part.
I have a decent grasp of NP Complete problems; that's not the issue. What I don't have is a good sense of where they turn up in "real" programming. Some (like knapsack and traveling salesman) are obvious, but others don't seem obviously connected to "real" problems.
I've had the experience several times of struggling with a difficult problem only to realize it is a well known NP Complete problem that has been researched extensively. If I had recognized the connection more quickly I could have saved quite a bit of time researching existing solutions to my specific problem.
Are there any resources (online or print) that specifically connect NP Complete to real world instances?
Edit:
For example, I was working on a program that tried to divide students into groups based on age, grade, and school of origin, which is essentially a graph partitioning problem. It took me a while to realize the connection.
I have found that Computers and Intractability is the definitive reference on this topic.
Usually the connection you are talking about is established with a so-called reduction: for example, you reduce 3-SAT to the problem you are working with, and then you can conclude that your problem is at least as hard as 3-SAT.
This step is not trivial, since you have to prove that you can turn every instance l of a known NP-hard problem L into an instance c of your problem C using a deterministic polynomial-time algorithm, in such a way that the answer to l is preserved in c.
So, apart from memorizing the basic relationships between common NP-hard problems, there's no way to be sure a problem is as hard as a known NP-hard one without first guessing a reduction and then proving it; you have to be smart about it.
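A standard textbook example of how compact such a reduction can be: a graph G = (V, E) has an independent set of size k if and only if it has a vertex cover of size |V| - k (the cover is simply the complement of the independent set), and computing that complement is clearly polynomial, so Independent Set reduces to Vertex Cover and vice versa. Most reductions in practice need more work than this, but the shape is always the same: a polynomial-time translation of instances that preserves yes/no answers.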
Here is a wiki link:
http://wapedia.mobi/en/List_of_NP-complete_problems
Notice it says
This list is in no way comprehensive (there are more than 3000 known NP-complete problems)
It would probably be a great task if anyone could compile such a list.
A theorist should try to understand/prove an NP-complete/hard problem. But a programmer doesn't have time for that; he needs a list.
Am I correct?
I think you should google it, read through all the links, and add any new problem you find in them to your list.
Hope it helps
PS : Don't forget to post the list when you're finished :P
For developing better intuition the book "The Algorithm Design Manual, Second Edition" by Skiena (excerpts on google books) is simply great.
A list in the back with problems (including hard problems) that include an illustration and (often) a discussion with a real-world example.
Covers both the theoretical and practical side of things, often talking about actual code.
Read excerpts online here (see some examples in chapter 14):
http://books.google.dk/books?id=7XUSn0IKQEgC&printsec=frontcover#v=onepage&q&f=false
Chapter 16 (not online) discusses some hard problems, including graph partition.
The other day I thought I'd attempt creating the Fibonacci algorithm in my code, but I've never been good at maths.
I ended up writing my own method with a loop but it seemed inefficient or not 'the proper way'.
Does anyone have any recommendations/reading material on implementing algorithms in code?
I find Project Euler useful for this kind of thing. It forces you to think about an algorithm and then implement it. Many of the questions then have extensive discussions on how to solve the problem (from the naive solutions to some pretty ingenious ones) that you can use to see what you did right and wrong.
In the discussion threads you'll find various implementations from other people in many different languages too. Coming up with a solution yourself and then comparing it to that from other people is (imho) a good way to learn.
Both of these introductory books have good information about this sort of thing:
How To Design Programs and, even more so, Structure and Interpretation of Computer Programs
Both are somewhat functional (and Scheme) oriented, but that's a natural fit for these sorts of problems.
On top of that, you might get quite a bit out of Project Euler
Derive your algorithm test-driven. I've been able to write much more complex algorithms correctly using TDD than I could before.
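A tiny sketch of what that flow can look like (plain asserts here; any test framework works, and binary search is just an arbitrary example): write the expected behaviour down first, then grow the implementation until the tests pass.

    # Test-first: the asserts below were written before the function body,
    # and the implementation was refined until they all passed.
    def binary_search(items, target):
        lo, hi = 0, len(items)
        while lo < hi:
            mid = (lo + hi) // 2
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo if lo < len(items) and items[lo] == target else -1

    assert binary_search([1, 3, 5, 7], 5) == 2   # found: return its index
    assert binary_search([1, 3, 5, 7], 4) == -1  # absent: return -1
    assert binary_search([], 1) == -1            # empty input is a classic edge case
    print("all tests pass")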
Go on YouTube and look at some of the lectures on Introduction to Algorithms. There are some really, really good lectures that break down some of the most common algorithms, such as the Fibonacci series, and how to optimize them.
Start reading about O notation so you can understand how your algorithm grows with variable-size input and how to classify the run-time of the algorithm you have.
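To see why that matters on the Fibonacci example itself (a small sketch of my own, not taken from the lectures): the naive recursive version recomputes the same values over and over and grows roughly exponentially, while a simple loop runs in linear time, so a loop was very likely the right instinct here.

    # Naive recursion: roughly O(2^n), because the same subproblems are recomputed.
    def fib_naive(n):
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    # Iterative version: O(n) time, O(1) space -- the loop is the sensible way.
    def fib_iter(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib_iter(10))   # 55
    print(fib_naive(10))  # 55 as well, but try n = 40 to feel the difference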
Start with this video series which I found excellent material on the subject:
Algorithms Lecture
If you can't translate pseudocode for a Fibonacci function into your language, then you should go and find a basic tutorial for your language, since it seems that you have not yet grasped its basic idioms.
If you have a working solution, but feel insecure about it, show it to others for review.