I am a little confused about the nature of heuristics.
We know that a heuristic need not give the correct output for every input instance.
Why, then, are heuristics proposed at all?
Heuristics trade potential accuracy or generality for performance (usually execution speed, but sometimes memory consumption). For example, your antivirus software uses heuristics to characterize what a virus might look like, and can take advantage of that information to decide which files it should spend more time analyzing. A good heuristic saves substantial time at minimal cost.
In graph search, the heuristic for an A* search need not be perfect. To guarantee an optimal solution, its estimated cost function h(x) only has to be less than or equal to the true cost of reaching the goal state (such a heuristic is called admissible). The closer h(x) is to the true cost, the faster an optimal solution is found.
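For instance (a minimal sketch of my own, not from the answer above): on a 4-connected grid with unit step costs, Manhattan distance can never overestimate the remaining path cost, so it is an admissible h(x).

```python
# A minimal sketch (my own illustration): Manhattan distance as an
# admissible heuristic for A* on a 4-connected grid with unit step costs.

def manhattan(cell, goal):
    """Each grid step changes x or y by exactly one, so |dx| + |dy|
    can never overestimate the true remaining cost: h(x) <= true cost."""
    (x, y), (gx, gy) = cell, goal
    return abs(x - gx) + abs(y - gy)

# h(x) = 0 is also admissible (A* then behaves like Dijkstra), but the
# closer h(x) gets to the true cost, the fewer nodes A* expands.
```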
Let me give you an example which might help you understand the importance of heuristics.
In Artificial Intelligence, search problems are broadly classified into blind (uninformed) search and directed (informed) search. Blind search uses algorithms such as BFS and DFS, and they are called blind for a reason: they have no knowledge of which direction to go, so you simply keep exploring until you reach the goal node. Imagine the time and space complexity of those algorithms on a large state space.
Now look at a directed search algorithm such as A*, where you have some kind of heuristic function, in simple terms an informed guess about which direction to take next.
A heuristic does not guarantee the best result, but it tries to give you a good solution and sometimes even the best one. For many classes of problems (e.g., the games you play), a good solution found quickly does the job better than spending enormous time and space finding the best solution.
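To make the blind-versus-directed contrast concrete, here is a minimal sketch (my own illustration; the grid, names, and sizes are assumptions): the same A* routine run with a Manhattan-distance heuristic and with h = 0 (which reduces it to Dijkstra-style blind search) returns the same optimal cost but expands very different numbers of nodes.

```python
import heapq

def a_star(start, goal, walls, h, size=20):
    """A* on a size x size 4-connected grid with unit step costs.
    Returns (path_cost, nodes_expanded). Passing h = lambda c: 0
    degenerates to Dijkstra (blind search), so the expansion counts
    show exactly what the heuristic buys."""
    frontier = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best_g = {start: 0}
    expanded = 0
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if g > best_g.get(cur, float("inf")):
            continue                    # stale queue entry
        expanded += 1
        if cur == goal:
            return g, expanded
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walls or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None, expanded

goal = (19, 19)
manhattan = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
print(a_star((0, 0), goal, set(), manhattan))    # informed: few expansions
print(a_star((0, 0), goal, set(), lambda c: 0))  # blind: many more expansions
```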
I hope it helps.
I'm new to Stack Overflow, but I'm here because I've searched everywhere and can't seem to find much information on the time complexity of A*, beyond the Wikipedia articles. I would also like to compare it to Dijkstra's algorithm and see how adding a heuristic in A* improves its performance.
I know it's a very advanced topic, but I just can't fully understand it from the information given there (even the analysis of Dijkstra's algorithm on Wikipedia seems quite advanced).
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
https://en.wikipedia.org/wiki/A*_search_algorithm
I would greatly appreciate it if anyone could explain the time complexity in more detail, or suggest any reading or learning material on the topic. I have a good understanding of the A* algorithm itself; I've only just started learning how to analyze it.
The answer is simply: it depends. A* by itself is not a complete algorithm; it is Dijkstra's algorithm plus a heuristic that fulfills certain properties (such as the triangle inequality). You can select different heuristic functions, which lead to different time complexities. The simplest heuristic is straight-line distance; there are also more advanced ones, such as the landmarks heuristic.
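To illustrate the landmarks idea just mentioned, here is a rough sketch (my own, assuming an undirected, connected graph with non-negative weights): precompute exact distances from a few landmark vertices once, then use the triangle inequality to get an admissible lower bound for any query.

```python
import heapq

def dijkstra_from(src, graph):
    """Exact distances from src; graph: {node: [(neighbor, weight), ...]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def landmark_heuristic(landmark_dists, target):
    """Triangle inequality: |d(L, v) - d(L, t)| <= d(v, t) in an
    undirected graph, so the max over landmarks is an admissible h.
    Assumes a connected graph (every node appears in each table)."""
    def h(v):
        return max((abs(d[v] - d[target]) for d in landmark_dists), default=0)
    return h

# Precompute once per graph (offline), then reuse across many A* queries:
# landmark_dists = [dijkstra_from(L, graph) for L in chosen_landmarks]
# h = landmark_heuristic(landmark_dists, target)
```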
In the worst case you always need to explore the whole neighborhood, so from a general, worst-case point of view you won't do better than Dijkstra.
However, in most practical applications you can achieve much better bounds.
This is only possible when you know certain properties of your graph and of your heuristic function; you can then make assumptions that lead to better complexity, but only for those instances.
For example, if you know that the straight-line distance is always the exact distance in your graph and you use the straight-line heuristic, then A* will expand only the nodes along an optimal path, which is the best behaviour you can hope for. That is much too strong an assumption for most applications, but you can see where this leads.
The bottom line: it depends heavily on the structure of your graph and on your heuristic function.
Here's a lecture on A*, since you asked for learning material: Efficient Route Planning (A*, Landmarks, Set Dijkstra) - University of Freiburg.
There is also a lot of material on the internet; the algorithm is quite popular, as it is very easy to implement and in most cases already fast enough (for non-complex games, for example).
I have a hard time grasping the key idea of the Held–Karp algorithm: how does it reduce the time complexity?
Is it because it uses dynamic programming, so that time is saved by fetching intermediate results from a cache, or because it removes some paths early in the calculation?
Also, is it possible to use a two-dimensional table to show the calculation for a simple TSP instance (3 or 4 cities)?
The dynamic programming procedure of the Held–Karp algorithm takes advantage of the following property of the TSP problem: Every subpath of a path of minimum distance is itself of minimum distance.
So essentially, instead of checking all solutions in a naive "top-down" brute-force approach (trying every possible permutation), we use a "bottom-up" approach in which every piece of intermediate information needed to solve the problem is computed once and only once. The initial step handles the smallest subpaths. Every time we move up to solve a larger subpath, we can look up the solutions to all the smaller subpath problems, which have already been computed. The time savings arise because the smaller subproblems are already solved, and those savings compound at each larger subpath level. But no "paths are removed" from the calculation: by the end of the procedure, every subproblem has been solved. The obvious drawback is that a very large amount of memory may be needed to store all the intermediate results.
In summary, the time savings of the Held–Karp algorithm come from the fact that it never solves the subproblem for any subset (combination) of the cities more than once, whereas the brute-force approach recomputes the solution to a given subset combination many times (albeit not necessarily consecutively within a given overall permutation).
Wikipedia contains a 2D distance matrix example and pseudocode here.
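For a concrete picture, here is a minimal sketch of the Held–Karp recurrence in Python (the 4-city distance matrix is my own illustrative example):

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp for TSP on a small distance matrix dist[i][j].
    C[(S, j)] = cost of the cheapest path that starts at city 0,
    visits every city in subset S exactly once, and ends at j."""
    n = len(dist)
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                # Each subproblem is solved once, then merely looked up.
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S - {j})
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# 4-city example (symmetric distances):
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(held_karp(dist))  # 18, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0
```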
In general, in computational complexity we talk about time and space complexity; that is, we think about how much time or space is necessary to solve some problem.
I would like to know whether there is another kind of resource (beyond time and space) that we could use as a reference when discussing computational complexity.
People have considered the number of references to external memory (https://www.ittc.ku.edu/~jsv/Papers/Vit.IO_book.pdf) and the use of cache memory (https://en.wikipedia.org/wiki/Cache-oblivious_algorithm). Where the computation is split between two or more nodes, the complexity of communication between those nodes is of interest (https://en.wikipedia.org/wiki/Communication_complexity), and there are some neat proofs in that area.
There are also links between these measures. Most obviously, using almost any resource takes time, so anything that takes no more than T units of time is likely to take no more than O(T) units of any other resource. The paper "An Overview of the Theory of Computational Complexity" by Hartmanis and Hopcroft puts computational complexity on a firm mathematical footing. It defines a very general notion of computational complexity measures and (Theorem 4) proves that, in their summary, "a function which is "easy" to compute in one measure is "easy" to compute in other measures". However, this result (like most of the paper) is stated in mathematically abstract terms that need not have any practical consequence in the real world: the connection between the two complexities here is loose enough that polynomial complexity in one measure could well be exponential complexity (or worse) in the other.
What I have found so far:
1. Naive Bayes classifier
2. K nearest neighbors classifier
3. Decision tree algorithms (C4.5, Random Forest)
4. Kernel Discriminant Analysis
5. Support vector machines
If there are any others, can someone please help me with the remaining algorithms in this category? I need a complete list of supervised ML classification algorithms for academic purposes. Thank you.
Although this is an active area of research, I wouldn't say new algorithms are invented every day, not good ones anyway. A new ML algorithm that beat the rest in even some semi-important cases would be pretty big news.
Usually, known algorithms are adapted to a given problem. Adapting one properly can itself be an area of research (spam classification is done with classical ML algorithms, but it's not trivial to get right; the same goes for digit recognition, etc.).
Regardless, it's hard to find a source that lists all the known classical algorithms. There are a lot of them, and it's unlikely that any one author lists them all; they usually list the ones they work with, or the ones they consider most important.
That said, I'm going to try to give you a longer list, and I'm making this community wiki to encourage other people to add more.
Naive Bayes classifier
K nearest neighbors classifier
Decision tree algorithms (C4.5, Random Forest)
Kernel Discriminant Analysis
Support vector machines
Logistic Regression
Passive Aggressive Classifiers
Gaussian Processes
Neural networks
The Winnow algorithm
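As a rough illustration of how these are used in practice (assuming scikit-learn, which ships implementations of most of the algorithms listed above behind one fit/predict interface), you can benchmark several of them on a single dataset:

```python
# Sketch assuming scikit-learn; dataset and settings are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression, PassiveAggressiveClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "k-nearest neighbors": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Support vector machine": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "Passive Aggressive": PassiveAggressiveClassifier(random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)                     # same API for every model
    print(f"{name}: {clf.score(X_test, y_test):.3f}")
```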
What exactly is a brute-force algorithm (beyond just the general approach)?
When can a problem be tackled with a brute-force approach, and when not?
What characteristics does an algorithm have when it takes a brute-force approach?
1 and 3: Brute force means that you go through all possible solutions exhaustively. For example, in a chess game, if you know you can win in two moves, brute force will go through every possible combination of moves without taking anything else into consideration. So the little pawn in the back that cannot influence the outcome will still be considered.
2: Because you consider everything, the problem quickly gets out of hand. Brute-forcing 15 moves ahead in chess is impossible because of combinatorial explosion (too many situations to consider). However, cleverer algorithms that take "knowledge about the problem" into account can go much further (20-30 moves ahead).
Edit: To clarify, brute force is the simplest (dumbest?) way to explore the space of solutions. If your problem is set in a countable space (chess moves are countable, passwords are countable; continuous things are uncountable), brute force explores that space, considering all solutions equally. In the chess example, you want to checkmate your opponent. That is done via a sequence of moves, which is countable. Brute force will go through all sequences of moves, however unlikely they may be. The word "unlikely" is important: it means that if you have knowledge about your problem (you know what is unlikely to be the solution, like sacrificing your queen), you can do much better than brute force.
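To make points 1 and 3 concrete, here is a minimal sketch (my own example): brute-forcing a 4-character lowercase "password" by enumerating the entire countable solution space, treating every candidate equally.

```python
from itertools import product
import string

def brute_force_password(check, length=4, alphabet=string.ascii_lowercase):
    """Try every candidate in the (countable) solution space, treating
    all of them as equally likely: the defining trait of brute force."""
    for candidate in product(alphabet, repeat=length):
        guess = "".join(candidate)
        if check(guess):
            return guess
    return None  # exhausted the space without finding a match

secret = "dumb"
print(brute_force_password(lambda g: g == secret))  # up to 26**4 = 456,976 tries
```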