Travelling Salesman (TSP) Performance - algorithm

Can anybody tell me how I can compare TSP optimal and heuristic solutions? I have implemented TSP, but I don't know how to compare them. In fact, how can I find the optimal cost of a TSP instance? Any method or guess?
Thanks

Check your solutions against well-known benchmark instances:
Download the data from TSPLIB here and compare your solutions with the optimal values here

Solving the TSP to optimality is an NP-hard problem.
To assess the quality of a heuristic solution, you have several options:
Compare it to heuristic solutions produced by other algorithms. This will give you an idea of which heuristics work better on the given instance, but obviously won't tell you anything about how close you are to the optimal solution.
Compare to the optimal solution. Concorde is probably your best bet for computing this.
Compute a lower bound for the TSP instance, and compare the heuristic solution to that. The two standard approaches are the Held-Karp lower bound and the assignment problem relaxation.
Use instances with known optimal solutions, such as those in TSPLIB.
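As a rough sketch of the last option, you could score a heuristic tour against a published optimum like this (the distance function assumes EUC_2D instances; my_points and my_tour are placeholders for your own parser and heuristic, not TSPLIB code):

    import math

    def tour_length(points, tour):
        """Length of a closed tour over EUC_2D points [(x, y), ...]."""
        total = 0.0
        for i in range(len(tour)):
            x1, y1 = points[tour[i]]
            x2, y2 = points[tour[(i + 1) % len(tour)]]
            total += math.hypot(x2 - x1, y2 - y1)
        return total

    def optimality_gap(heuristic_len, optimal_len):
        """Relative gap in percent; 0.0 means the heuristic matched the optimum."""
        return 100.0 * (heuristic_len - optimal_len) / optimal_len

    # Example: TSPLIB's berlin52 has a published optimal tour length of 7542.
    # my_points and my_tour would come from your own parser and heuristic:
    # print(optimality_gap(tour_length(my_points, my_tour), 7542))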

Related

Is applying AI OK and/or practical for finding the optimal solution to an algorithmic problem?

Both in learning environments and in practice, I have had to use different algorithms to solve problems from time to time. But the more I use them, the more it seems that AI could be deployed to try to find the optimal solution, especially for NP-complete problems, since the AI's "progression" is easily tracked.
If we, for example, never knew how to solve the knapsack problem efficiently, I wonder: is applying AI practical and/or ever OK for finding the optimal solution to a given problem?
AI algorithms in general can find an approximation to basically any function. They are so powerful because this is true even for extremely complex functions with many input parameters and/or many output parameters and/or a very complicated internal structure.
On the other hand, there is no known way to solve NP-complete problems "quickly". In practice, you would often have to search through a huge solution space for finding the optimal solution. This is why people use heuristic methods and approximation algorithms to efficiently find a "sufficiently good" solution.
So yes, you can use AI to find a good approximate solution (and possibly even a better one than with traditional heuristics) to a computationally hard problem.
But no, if the problem is NP-complete, you still cannot know that you have found the optimal solution.

The greedy algorithm and implementation

Hello, I've just started learning greedy algorithms and I first looked at the classic coin-changing problem. I could understand the greediness (i.e., choosing a locally optimal step toward a global optimum) in the algorithm, since I am choosing the highest-value coin such that sum + {value of chosen coin} <= total value. Then I started to solve some greedy-algorithm problems on some sites. I could solve most of the problems, but couldn't figure out exactly where the greediness is applied in them. I coded the only solution I could think of for the problems and got it accepted. The editorials also show the same way of solving the problems, but I could not understand the application of the greedy paradigm in the algorithm.
Are greedy algorithms the only way of solving a particular range of problems? Or are they just one way of solving problems that may be more efficient?
Could you give me pseudocode for the same problem with and without the application of the greedy paradigm?
There are lots of real-life examples of greedy algorithms. One of the most obvious is the coin-changing problem: to make change in a certain currency, we repeatedly dispense the largest denomination. Thus, to give out seventeen dollars and sixty-one cents in change, we give out a ten-dollar bill, a five-dollar bill, two one-dollar bills, two quarters, one dime, and one penny. By doing this, we are guaranteed to minimize the number of bills and coins. This algorithm does not work in all monetary systems...more here
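For illustration, a minimal sketch of that greedy rule in Python (the denominations-in-cents representation is an assumption; the last call shows a coin system where the greedy choice is not optimal):

    def greedy_change(amount, denominations):
        """Repeatedly take the largest coin that still fits; returns coins used."""
        coins = []
        for d in sorted(denominations, reverse=True):
            while amount >= d:
                amount -= d
                coins.append(d)
        return coins

    # 17 dollars and 61 cents with US denominations (in cents):
    print(greedy_change(1761, [1000, 500, 100, 25, 10, 1]))
    # [1000, 500, 100, 100, 25, 25, 10, 1] -- the change described above.

    # A system where greedy is NOT optimal: 6 = 4 + 1 + 1 (3 coins) vs 3 + 3 (2 coins).
    print(greedy_change(6, [1, 3, 4]))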
I think that there is always another way to solve a problem, but sometimes, as you've stated, it probably will be less efficient.
For example, you can always check all the options (all combinations of coins), store the results and choose the best, but of course the efficiency is terrible.
Hope it helps.
Greedy algorithms are just a class of algorithms that iteratively construct/improve a solution.
Imagine the most famous problem - TSP. You can formulate it as an Integer Linear Programming problem, give it to an ILP solver, and it will give you the globally optimal solution (if it has enough time). But you could also do it in a greedy way. You construct some solution (e.g. randomly) and then look for changes (e.g. switching the order of two cities) that improve it, and you keep making such changes until no improving change is possible.
So the bottom line is: greedy algorithms are only one method of solving hard problems efficiently (in time, but not necessarily in the quality of the solution); there are other classes of algorithms for solving such problems.
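As a rough sketch of that improve-by-swapping idea (the random distance matrix is just a stand-in for a real instance; real implementations usually prefer 2-opt moves over plain city swaps):

    import random

    def tour_cost(dist, tour):
        """Cost of a closed tour given a symmetric distance matrix."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def swap_improve(dist, tour):
        """Keep swapping pairs of cities while any swap shortens the tour."""
        best = tour_cost(dist, tour)
        improved = True
        while improved:
            improved = False
            for i in range(len(tour)):
                for j in range(i + 1, len(tour)):
                    tour[i], tour[j] = tour[j], tour[i]
                    cost = tour_cost(dist, tour)
                    if cost < best:
                        best, improved = cost, True
                    else:
                        tour[i], tour[j] = tour[j], tour[i]  # undo the swap
        return tour, best

    n = 8
    dist = [[0 if i == j else random.randint(1, 50) for j in range(n)] for i in range(n)]
    for i in range(n):            # make the random matrix symmetric
        for j in range(i):
            dist[i][j] = dist[j][i]
    tour = list(range(n))
    random.shuffle(tour)          # random starting solution
    print(swap_improve(dist, tour))   # locally optimal tour and its cost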
For coins (in standard coin systems), the greedy algorithm is also the optimal one, therefore the "greediness" is not as visible as with some other problems.
In some cases you prefer a solution which is not the best one, but which you can compute much faster (computing the real best solution can take years, for example).
Then you choose a heuristic that should give you the best results, based on typical input data, its structure and what you want to accomplish.
On Wikipedia there is a good illustration of a greedy solution for finding the largest sum of numbers along a path in a tree.
Imagine that you have, for example, 2^1000 nodes in this tree. To find the optimal solution, you have to visit each node once. A personal computer today is not able to do this in your lifetime, therefore you want some heuristic. A greedy algorithm, however, finds a solution in just 1000 steps (which does not take more than one millisecond).
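A tiny sketch of that contrast, with a toy tree loosely modelled on the Wikipedia figure (the exact values are assumptions): the greedy descent is fast but can miss the best path, while the exhaustive search always finds it.

    # Each node is (value, children); a tiny example tree.
    tree = (7, [(3, [(99, []), (1, [])]),
                (12, [(5, []), (6, [])])])

    def greedy_path_sum(node):
        """Always descend into the child with the larger value -- fast, may be wrong."""
        value, children = node
        while children:
            best_child = max(children, key=lambda c: c[0])
            value += best_child[0]
            children = best_child[1]
        return value

    def best_path_sum(node):
        """Exhaustive search: visits every node, always finds the true maximum."""
        value, children = node
        return value + (max(best_path_sum(c) for c in children) if children else 0)

    print(greedy_path_sum(tree))  # 25  (7 -> 12 -> 6)
    print(best_path_sum(tree))    # 109 (7 -> 3 -> 99)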

Divide and conquer, dynamic programming and greedy algorithms!

When I have a problem with optimal substructure and no subproblem shares sub-subproblems, can I use a divide-and-conquer algorithm to solve it?
But when the subproblems share sub-subproblems (overlapping subproblems), can I then use dynamic programming to solve the problem?
Is this correct?
And how are greedy algorithms similar to dynamic programming?
When I have a problem with optimal substructure and no subproblem shares sub-subproblems, can I use a divide-and-conquer algorithm to solve it?
Yes, as long as you can find an optimal algorithm for each kind of subproblem.
But when the subproblems share sub-subproblems (overlapping subproblems), can I then use dynamic programming to solve the problem? Is this correct?
Yes. Dynamic programming is basically a special case of the divide-and-conquer family in which the subproblems overlap, so each subproblem's result can be computed once and reused.
And how are greedy algorithms similar to dynamic programming?
They're different.
Dynamic programming gives you the optimal solution.
A greedy algorithm usually gives a good/fair solution in a small amount of time, but it isn't guaranteed to reach the optimum.
It is, let's say, similar because it usually divides the solution construction into several stages in which it makes choices that are locally optimal. But if the stages are not optimal substructures of the original problem, then normally it doesn't lead to the best solution.
EDIT:
As pointed out by @rrenaud, there are some greedy algorithms that have been proven to be optimal (e.g. Dijkstra, Kruskal, Prim, etc.).
So, to be more correct, the main difference between greedy and dynamic programming is that the former is not exhaustive on the space of solutions while the latter is.
In fact greedy algorithms are short-sighted on that space, and each choice made during solution construction is never reconsidered.
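To make that difference concrete, here is a small sketch (the {1, 3, 4} coin system is an assumed example, not from the answers above): the greedy choice is never reconsidered and misses the optimum, while the bottom-up dynamic program builds on optimal sub-solutions and finds it.

    def greedy_coins(amount, coins):
        """Locally optimal choices, never reconsidered."""
        used = []
        for c in sorted(coins, reverse=True):
            while amount >= c:
                amount -= c
                used.append(c)
        return used

    def dp_coins(amount, coins):
        """Bottom-up DP: best[a] = fewest coins that sum to a."""
        best = [0] + [None] * amount
        for a in range(1, amount + 1):
            options = [best[a - c] + 1 for c in coins if c <= a and best[a - c] is not None]
            best[a] = min(options) if options else None
        return best[amount]

    print(greedy_coins(6, [1, 3, 4]))   # [4, 1, 1] -> 3 coins
    print(dp_coins(6, [1, 3, 4]))       # 2 (3 + 3)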
Dynamic programming uses a bottom-up approach: it saves the solutions to previous subproblems and refers back to them, which lets us build an optimal solution from all the available sub-solutions. The greedy approach works top-down: it takes the locally optimal choice at each step and does not reconsider previous levels, which can lead to a less optimal solution.
Dynamic = bottom-up, optimal solution.
Greedy = top-down, possibly less optimal, less time-consuming.

Minimum Bandwidth Problem

I'm interested in the NP-complete "minimum bandwidth" problem, i.e., finding the minimum bandwidth of a graph. For those not familiar, here is a link about it...
http://en.wikipedia.org/wiki/Graph_bandwidth
I've implemented the Cuthill-McKee algorithm, and this was very successful at giving me a permutation of the vertices in which the bandwidth was reduced; however, I'm looking for the minimum bandwidth, not just a reduced bandwidth that is close. If any of you have experience with this problem, what algorithms provide solutions that are actually minimal and not just reduced? I don't need an actual implementation of any algorithm, I just want suggestions for which algorithms to research that yield true minimum bandwidths.
That's an interesting problem, but when I read the Wiki page (your link):
Both the unweighted and weighted versions are special cases of the quadratic bottleneck assignment problem. The bandwidth problem is NP-hard, even for some special cases.[4] Regarding the existence of efficient approximation algorithms, it is known that the bandwidth is NP-hard to approximate within any constant, and this even holds when the input graphs are restricted to caterpillar trees (Dubey, Feige & Unger 2010). On the other hand, a number of polynomially-solvable special cases are known.
So the wiki says it's NP-hard to approximate within any constant (so there is no PTAS for this problem), and your only real option is heuristic algorithms. Sure, a brute-force algorithm works (number the nodes 1..n randomly at startup, then try all orderings), but you would have to spend 1000 years to solve it even for a caterpillar tree.
You should search for heuristic algorithms, not approximation and exact algorithms.
As it is NP-complete you have to use some kind of "brute force" algorithm. So mainly you have the different flavours of brute force as options, e.g. branch-and-bound or integer linear programming (it's ILP, so it's in NP).
As it is NP-complete you can also take any solver for a different NP-complete problem (TSP, SAT, ...): transform the problem instance using the reduction from the NP-completeness proof, apply that algorithm, and transform the result back.
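For very small graphs you can even compute the exact minimum by brute force over all orderings; a rough sketch (the adjacency representation and the 4-node example are assumptions) that could serve as a baseline for checking a heuristic ordering:

    from itertools import permutations

    def bandwidth(order, edges):
        """Bandwidth of a vertex ordering: the longest edge under that numbering."""
        position = {v: i for i, v in enumerate(order)}
        return max(abs(position[u] - position[v]) for u, v in edges)

    def minimum_bandwidth(vertices, edges):
        """Exact but O(n!): only feasible for very small graphs."""
        return min(bandwidth(p, edges) for p in permutations(vertices))

    # A tiny assumed example graph: a 4-cycle with one chord.
    vertices = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    print(minimum_bandwidth(vertices, edges))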
The simplest improvement you can make is probably to take the result of your Cuthill-McKee algorithm and throw Tabu Search at it.
See this answer for an overview on some of the algorithms that can be applied.

What is the difference between a heuristic and an algorithm?

What is the difference between a heuristic and an algorithm?
An algorithm is the description of an automated solution to a problem. What the algorithm does is precisely defined. The solution could or could not be the best possible one but you know from the start what kind of result you will get. You implement the algorithm using some programming language to get (a part of) a program.
Now, some problems are hard and you may not be able to get an acceptable solution in an acceptable time. In such cases you can often get a not-too-bad solution much faster by applying some arbitrary choices (educated guesses): that's a heuristic.
A heuristic is still a kind of algorithm, but one that will not explore all possible states of the problem, or will begin by exploring the most likely ones.
Typical examples are from games. When writing a chess game program you could imagine trying every possible move at some depth level and applying some evaluation function to the board. A heuristic would exclude full branches that begin with obviously bad moves.
In some cases you're not searching for the best solution, but for any solution fitting some constraint. A good heuristic would help to find a solution in a short time, but may also fail to find any if the only solutions are in the states it chose not to try.
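A toy sketch of the chess-style pruning mentioned above, on an abstract game tree (the tree, the evaluation values and the cut-off threshold are all assumptions; a real chess engine would generate moves and evaluate boards instead):

    # Each move is (quick_eval, outcomes); outcomes are final evaluations of that branch.
    moves = [(+1, [3, 5, 8]),     # looks reasonable at a glance
             (-9, [-7, -6, -8]),  # obviously bad move -- the heuristic skips it entirely
             (0,  [1, 9, 0])]

    def best_with_pruning(moves, prune_below=-5):
        """Search only branches whose quick evaluation clears the threshold.
        Faster, but it could miss the true best line hidden behind a 'bad' move."""
        best = None
        for quick_eval, outcomes in moves:
            if quick_eval < prune_below:
                continue                      # heuristic cut: skip the whole branch
            branch_best = max(outcomes)
            best = branch_best if best is None else max(best, branch_best)
        return best

    print(best_with_pruning(moves))  # 9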
An algorithm is typically deterministic and proven to yield an optimal result.
A heuristic has no proof of correctness, often involves random elements, and may not yield optimal results.
Many problems for which no efficient algorithm to find an optimal solution is known have heuristic approaches that yield near-optimal results very quickly.
There are some overlaps: "genetic algorithms" is an accepted term, but strictly speaking, those are heuristics, not algorithms.
Heuristic, in a nutshell, is an "educated guess". Wikipedia explains it nicely. In the end, a method that gains "general acceptance" is taken as the solution of choice for the specified problem.
Heuristic is an adjective for experience-based techniques that help in problem solving, learning and discovery. A heuristic method is used to rapidly come to a solution that is hoped to be close to the best possible answer, or 'optimal solution'. Heuristics are "rules of thumb", educated guesses, intuitive judgments or simply common sense. A heuristic is a general way of solving a problem. Heuristics as a noun is another name for heuristic methods.
In more precise terms, heuristics stand for strategies using readily accessible, though loosely applicable, information to control problem solving in human beings and machines.
An algorithm, on the other hand, is a method containing a finite set of instructions used to solve a problem. The method has been proven mathematically or scientifically to work for the problem. There are formal methods and proofs.
A heuristic algorithm is an algorithm that is able to produce an acceptable solution to a problem in many practical scenarios, in the fashion of a general heuristic, but for which there is no formal proof of its correctness.
An algorithm is a self-contained step-by-step set of operations to be performed, typically interpreted as a finite sequence of (computer or human) instructions to determine a solution to a problem such as: is there a path from A to B, or what is the smallest path between A and B? In the latter case, you could also be satisfied with a 'reasonably close' alternative solution.
There are certain categories of algorithms, of which the heuristic algorithm is one. Depending on the (proven) properties of the algorithm in this case, it falls into one of these three categories (note 1):
Exact: the solution is proven to be an optimal (or exact solution) to the input problem
Approximation: the deviation of the solution value is proven to be never further away from the optimal value than some pre-defined bound (for example, never more than 50% larger than the optimal value)
Heuristic: the algorithm has not been proven to be optimal, nor within a pre-defined bound of the optimal solution
Notice that an approximation algorithm is also a heuristic, but with the stronger property that there is a proven bound to the solution (value) it outputs.
For some problems, no one has ever found an 'efficient' algorithm to compute the optimal solutions (note 2). One of those problems is the well-known Traveling Salesman Problem. Christofides' algorithm for the Traveling Salesman Problem, for example, used to be called a heuristic, as it was not proven that it was within 50% of the optimal solution. Since that has been proven, however, Christofides' algorithm is more accurately referred to as an approximation algorithm.
Due to restrictions on what computers can do, it is not always possible to efficiently find the best solution possible. If there is enough structure in a problem, there may be an efficient way to traverse the solution space, even though the solution space is huge (as in the shortest-path problem).
Heuristics are typically applied to improve the running time of algorithms, by adding 'expert information' or 'educated guesses' to guide the search direction. In practice, a heuristic may also be a sub-routine for an optimal algorithm, to determine where to look first.
(note 1): Additionally, algorithms are characterised by whether they include random or non-deterministic elements. An algorithm that always executes the same way and produces the same answer is called deterministic.
(note 2): This is called the P vs NP problem, and problems that are classified as NP-complete and NP-hard are unlikely to have an 'efficient' algorithm. Note: as @Kriss mentioned in the comments, there are even 'worse' types of problems, which may need exponential time or space to compute.
There are several answers that address part of the question. I considered them incomplete or not accurate enough, and decided not to edit the accepted answer made by @Kriss.
Actually I don't think they have a lot in common. Some algorithms use heuristics in their logic (often to make fewer calculations or get results faster). Usually heuristics are used in so-called greedy algorithms.
A heuristic is some "knowledge" that we assume is useful for making the best choice in our algorithm (when a choice has to be made). For example, a heuristic in chess could be: always take the opponent's queen if you can, since you know it is the strongest piece. Heuristics do not guarantee that they will lead you to the correct answer, but (if the assumption is correct) they often get an answer that is close to the best in a much shorter time.
An algorithm is a clearly defined set of instructions to solve a problem; heuristics involve utilising an approach of learning and discovery to reach a solution.
So, if you know how to solve a problem then use an algorithm. If you need to develop a solution then it's heuristics.
Heuristics are algorithms, so in that sense there is no difference; however, heuristics take a "guess" approach to problem solving, yielding a "good enough" answer rather than finding a "best possible" solution.
A good example is where you have a very hard (read NP-complete) problem you want a solution for but don't have the time to arrive at it, so you have to use a good-enough solution produced by a heuristic algorithm, such as finding a solution to a travelling salesman problem using a genetic algorithm.
An algorithm is a sequence of operations that, given an input, computes something (a function) and outputs a result.
An algorithm may yield exact or approximate values.
It may also compute a random value that is, with high probability, close to the exact value.
A heuristic algorithm uses some insight about the input values and computes a value that is not exact (but may be close to optimal).
In some special cases, a heuristic can find the exact solution.
A heuristic is usually an optimization or a strategy that usually provides a good enough answer, but not always and rarely the best answer. For example, if you were to solve the traveling salesman problem with brute force, discarding a partial solution once its cost exceeds that of the current best solution is a heuristic: sometimes it helps, other times it doesn't, and it definitely doesn't improve the theoretical (big-O) running time of the algorithm.
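A minimal sketch of that pruning rule (the 4-city distance matrix is an assumed example): the recursion abandons any partial tour whose cost already exceeds the best complete tour found so far.

    def tsp_brute_force(dist):
        """Try every tour from city 0, pruning partial tours that are already
        more expensive than the best complete tour found so far."""
        n = len(dist)
        best = [float("inf")]

        def extend(tour, cost):
            if cost >= best[0]:                            # the pruning heuristic
                return
            if len(tour) == n:
                best[0] = min(best[0], cost + dist[tour[-1]][tour[0]])  # close the tour
                return
            for city in range(n):
                if city not in tour:
                    extend(tour + [city], cost + dist[tour[-1]][city])

        extend([0], 0)
        return best[0]

    # A small assumed instance (distances need not be symmetric):
    dist = [[0, 2, 9, 10],
            [1, 0, 6, 4],
            [15, 7, 0, 8],
            [6, 3, 12, 0]]
    print(tsp_brute_force(dist))   # length of the shortest closed tour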
I think a heuristic is more of a constraint used in learning-based models in artificial intelligence, since the future solution states are difficult to predict.
But then my doubt, after reading the above answers, is:
"How can heuristics be successfully applied using stochastic optimization techniques? Or can they function as full-fledged algorithms when used with stochastic optimization?"
http://en.wikipedia.org/wiki/Stochastic_optimization
One of the best explanations I have read comes from the great book Code Complete, which I now quote:
A heuristic is a technique that helps you look for an answer. Its results are subject to chance because a heuristic tells you only how to look, not what to find. It doesn’t tell you how to get directly from point A to point B; it might not even know where point A and point B are. In effect, a heuristic is an algorithm in a clown suit. It’s less predictable, it’s more fun, and it comes without a 30-day, money-back guarantee.
Here is an algorithm for driving to someone’s house: Take Highway 167 south to Puyallup. Take the South Hill Mall exit and drive 4.5 miles up the hill. Turn right at the light by the grocery store, and then take the first left. Turn into the driveway of the large tan house on the left, at 714 North Cedar.
Here’s a heuristic for getting to someone’s house: Find the last letter we mailed you. Drive to the town in the return address. When you get to town, ask someone where our house is. Everyone knows us—someone will be glad to help you. If you can’t find anyone, call us from a public phone, and we’ll come get you.
The difference between an algorithm and a heuristic is subtle, and the two terms overlap somewhat. For the purposes of this book, the main difference between the two is the level of indirection from the solution. An algorithm gives you the instructions directly. A heuristic tells you how to discover the instructions for yourself, or at least where to look for them.
Heuristics find a solution without any guarantee on the quality of the solution found; for that reason it only makes sense to develop heuristics that run in polynomial time. These methods are suitable for real-world or large problems that are so awkward from a computational point of view that there is not even an algorithm capable of finding an approximate solution in polynomial time.
