I have used a random mutation hill climbing algorithm as part of a project that I am working on, but was wondering whether it would be better to use simulated annealing to minimise the chance of getting stuck in any local optima.
The question I have is: which one tends to be faster in your experience? Obviously there is a huge wealth of applications for both algorithms; this is more of a generalised pondering, if you like.
Thank you.
There's no way to tell in advance (unless your project is a 100% match for a well-studied academic problem like a pure TSP, and even then...). It depends on your project's constraints and size (and on whether you implement the algorithms correctly).
So, to be sure, you have to implement both algorithms (and many others, like Tabu Search) and use a benchmarker to compare them.
That being said, I'd put my money on Simulated Annealing over Random Mutation Hill Climbing any day :)
Note: Simulated Annealing is a short but difficult algorithm: I only got it right on my third implementation, and I've seen plenty of wrong implementations (that still output a pretty OK solution) in blogs, etc. It's easier to just reuse existing optimization algorithms.
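To make the difference concrete, here is a minimal Python sketch (the cost and neighbour functions are placeholders you would supply for your own problem, so treat this as an illustration rather than a drop-in implementation). Hill climbing only accepts improving moves; simulated annealing sometimes accepts worse moves, with a probability that shrinks as the temperature cools, which is what lets it escape local optima.

    import math
    import random

    def simulated_annealing(initial, cost, neighbour,
                            t_start=1.0, t_end=1e-3, alpha=0.95, steps_per_t=100):
        """Minimal simulated annealing sketch for minimising cost(x).
        neighbour(x) must return a random nearby candidate solution."""
        current, current_cost = initial, cost(initial)
        best, best_cost = current, current_cost
        t = t_start
        while t > t_end:
            for _ in range(steps_per_t):
                candidate = neighbour(current)
                candidate_cost = cost(candidate)
                delta = candidate_cost - current_cost
                # Plain hill climbing would only accept delta < 0; annealing also
                # accepts a worse move with probability exp(-delta / t).
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current, current_cost = candidate, candidate_cost
                    if current_cost < best_cost:
                        best, best_cost = current, current_cost
            t *= alpha  # geometric cooling schedule
        return best, best_cost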
I see people biased towards the DP approach over the greedy approach because it can solve optimization problems. Which one do you think is preferable? I need to collect arguments in favor of the preferable technique to argue with my mates. LOL. OK, DP is used to solve problems that have optimal substructure and to which the principle of optimality applies. But is that enough for DP to be better than the greedy approach?
Your question is meaningless without knowing what problem you are trying to solve.
Dynamic Programming is a tool. It is useful for solving a certain class of problems.
Greedy Algorithms are another tool. They are useful in other situations.
It's like asking, "Which is better: a hammer or a saw?"
The answer will be very different depending on what you are trying to do.
Let's take the coin change problem as an example. With an arbitrary set of coin denominations, the greedy approach may fail to give the correct (optimal) result, whereas the DP approach always will; DP is the standard way to guarantee an optimal answer here.
To answer your question: optimality aside, with a greedy approach you might not even get a correct solution in the first place for certain kinds of problems.
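To make the coin change example concrete, here is a small Python sketch (the coin values are made up purely for illustration): with coins {1, 3, 4} and amount 6, greedy takes 4 + 1 + 1 = three coins, while DP finds 3 + 3 = two coins.

    def greedy_coin_change(coins, amount):
        """Repeatedly take the largest coin that still fits (not always optimal)."""
        count = 0
        for coin in sorted(coins, reverse=True):
            count += amount // coin
            amount %= coin
        return count if amount == 0 else None  # None: greedy got stuck

    def dp_coin_change(coins, amount):
        """Classic DP: best[a] = fewest coins summing to a (always optimal)."""
        INF = float("inf")
        best = [0] + [INF] * amount
        for a in range(1, amount + 1):
            for coin in coins:
                if coin <= a and best[a - coin] + 1 < best[a]:
                    best[a] = best[a - coin] + 1
        return best[amount] if best[amount] != INF else None

    print(greedy_coin_change([1, 3, 4], 6))  # 3 coins: 4 + 1 + 1
    print(dp_coin_change([1, 3, 4], 6))      # 2 coins: 3 + 3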
What areas of math are prerequisite for learning algorithms?
I guess it depends a lot on the kind of algorithms you want to use and how deeply you want to understand them.
Understanding the usual basic data structures requires almost no math background.
Most graphics algorithms require knowledge of trigonometry and spatial geometry.
Physics-engine algorithms are easier to understand if you have some physics background.
If you want your program to help you make decisions, you might need to study operations research, which is a really huge subfield of math that includes graph theory, game theory, and optimisation (which in turn includes analysis and linear algebra).
In any case, having a logical/mathematical mind obviously helps a lot with understanding and with checking/proving that your code can or cannot work.
If you're talking about simple programming, you don't really need a lot of math. At this level, your problem-solving and logic abilities are more important, but it is necessary to learn the basics of problem solving using flow charts and process planning.
On the other hand, math is known to improve your abilities, and in some areas you will need it to achieve the expected results. For example, to create an animation engine, knowing linear algebra is more than useful, and so is physics.
Can someone give a brief list of Mathematics areas (like functions, calculus etc.,) to learn for understanding the Algorithm Analysis books (like Introduction to Algorithms)?
I would start with discrete mathematics. That would probably give you the best computational basis and intuition for what computer algorithms are about in terms of working with sets and discrete numbers in general. Something on data structures and algorithms would help as well; this would give you a good background on things like sorting arrays, efficient searches, etc. You could then move on to books on artificial intelligence (my best guess), but by this time you should definitely be ready to read some algorithms books. IMO, that is.
UPDATE
Also, calculus never hurts either if you're working with minimization/maximization/optimization problems. That might or might not be needed depending on the specific algorithms you'd like to work with.
To start with:
number theory, especially induction.
basic set theory, sets and functions.
basic calculus, limits.
logarithms.
discrete math (combinations, permutations, etc.).
generating functions (advanced discrete math).
For Introduction to Algorithms the only things you really need to know are induction and some basic set theory. For the more advanced parts you also need to know some linear algebra and probability theory.
Christoph Koutschan has set up an interesting survey that tries to identify the most important algorithms "in the world". Since one of the criteria is that "the algorithm has to be widely used", I thought that extending the survey to the huge group of users at Stack Overflow would be a natural thing to do.
So, what do you think? Which algorithms deserve a place in the Algorithm Hall of Fame?
I somewhat like this algorithm:
1. Write code.
2. Test code. If buggy, go to step 3. If not, go to step 4.
3. Rewrite code, then go back to step 2.
4. Get somebody else to test your code. If they discover any bugs, return to step 3; otherwise, go to step 5.
5. Congratulations, your code has no obvious bugs! Now you wait for a user to stumble upon a hidden one, in which case you return to step 3 once again, unless you're lucky and are no longer providing support for the code in question.
I'd say binary search since it's usually the first algorithm people learn. And the RSA encryption algorithms are pretty important.
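Since binary search comes up so often, here is the standard textbook version in Python (just an illustrative sketch, not tied to any particular answer or library):

    def binary_search(sorted_items, target):
        """Return the index of target in sorted_items, or -1 if it is absent."""
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1   # target can only be in the upper half
            else:
                hi = mid - 1   # target can only be in the lower half
        return -1

    print(binary_search([1, 3, 5, 7, 9], 7))  # prints 3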
Hashing, since it's the basis for so much in security, data structures, etc. Hashing algorithms have generated a lot of Ph.D. dissertations.
I just wanted to learn the names of the algorithms. Thanks.
A general strategy in game algorithms is the minimax strategy, augmented with alpha-beta pruning. The minimax algorithm finds the best move, and alpha-beta pruning prevents it from going into branches of the game tree that cannot produce a better result than previous branches already have.
However, the chess game tree is too large to be completely examined. That is why computer chess engines only examine the tree up to a certain depth, and then use various methods to evaluate the positions. Many of these methods are based on heuristics. Also, a serious chess-playing program will have a library of openings so that it can play in the beginning by just consulting that library and not having to examine the game tree. Finally, many end games are completely solved, and these are also programmed in as a library.
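To make the idea concrete, here is a minimal Python sketch of depth-limited minimax with alpha-beta pruning. The functions is_terminal, evaluate, legal_moves and apply_move are placeholders for whatever game representation you use; they are not from any particular chess engine.

    def alphabeta(state, depth, alpha, beta, maximizing):
        """Depth-limited minimax with alpha-beta pruning.
        Returns a heuristic score from the maximizing player's point of view."""
        if depth == 0 or is_terminal(state):
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for move in legal_moves(state):
                value = max(value, alphabeta(apply_move(state, move),
                                             depth - 1, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # beta cutoff: the minimizing side will avoid this branch
            return value
        else:
            value = float("inf")
            for move in legal_moves(state):
                value = min(value, alphabeta(apply_move(state, move),
                                             depth - 1, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:
                    break  # alpha cutoff
            return value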
Minimax
If you need in-depth knowledge about AI algorithms, I think the book Artificial Intelligence: A Modern Approach is the best source.
Wikipedia is a safe bet as a starting point. Did you look there?
Rybka seems to be a contender.
Have a look at some of the open-source chess engines, for instance Crafty, or even better, how about Fruit? It plays at pretty much the same strength as Rybka. But there are many new algorithms out there. The day will come when human chess players will have to just say "I am not playing against this engine", and this article pretty much sums it up: http://www.mychessblog.com/man-versus-machine-when-a-computer-will-become-world-chess-champion/