I am trying to write a C++ program for a problem in which I have a number of points, and I need to find a path that goes through all of them. This is not actually the TSP, because as far as I know, in the TSP it is possible to travel from every point to every other point. In my case the path network between the points is fixed, and I just need to find a suitable path that goes through all the points, given that a point may not have a connection to every other point. So which algorithm am I supposed to use?
It seems you are looking for a way to traverse a graph? If so, have you tried breadth-first search (http://en.wikipedia.org/wiki/Breadth-first_search) or depth-first search (http://en.wikipedia.org/wiki/Depth-first_search) to traverse your graph?
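In case it helps, here is a minimal sketch of both traversals on an adjacency-list graph (the adjacency-list representation and the function names are assumptions, not something from the question); note that a plain traversal visits every reachable vertex but does not by itself produce a single path through all of them:

```cpp
#include <vector>
#include <queue>
#include <stack>
using namespace std;

// Breadth-first search from `start`, marking every reachable vertex.
void bfs(const vector<vector<int>>& adj, int start, vector<bool>& visited) {
    queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (!visited[v]) { visited[v] = true; q.push(v); }
    }
}

// Depth-first search from `start` (iterative, using an explicit stack).
void dfs(const vector<vector<int>>& adj, int start, vector<bool>& visited) {
    stack<int> st;
    st.push(start);
    while (!st.empty()) {
        int u = st.top(); st.pop();
        if (visited[u]) continue;
        visited[u] = true;
        for (int v : adj[u]) if (!visited[v]) st.push(v);
    }
}
```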
You want to find a Hamiltonian path for a graph.
In the mathematical field of graph theory, a Hamiltonian path (or
traceable path) is a path in an undirected graph that visits each
vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a
Hamiltonian path that is a cycle. Determining whether such paths and
cycles exist in graphs is the Hamiltonian path problem, which is
NP-complete.
Some techniques that exist:
There are n! different sequences of vertices that might be Hamiltonian
paths in a given n-vertex graph (and are, in a complete graph), so a
brute force search algorithm that tests all possible sequences would
be very slow. There are several faster approaches. A search procedure
by Frank Rubin divides the edges of the graph into three classes:
those that must be in the path, those that cannot be in the path, and
undecided. As the search proceeds, a set of decision rules classifies
the undecided edges, and determines whether to halt or continue the
search. The algorithm divides the graph into components that can be
solved separately. Also, a dynamic programming algorithm of Bellman,
Held, and Karp can be used to solve the problem in time O(n^2 · 2^n). In
this method, one determines, for each set S of vertices and each
vertex v in S, whether there is a path that covers exactly the
vertices in S and ends at v. For each choice of S and v, a path exists
for (S,v) if and only if v has a neighbor w such that a path exists
for (S − v,w), which can be looked up from already-computed
information in the dynamic program.
Andreas Björklund provided an alternative approach using the
inclusion–exclusion principle to reduce the problem of counting the
number of Hamiltonian cycles to a simpler counting problem, of
counting cycle covers, which can be solved by computing certain matrix
determinants. Using this method, he showed how to solve the
Hamiltonian cycle problem in arbitrary n-vertex graphs by a Monte
Carlo algorithm in time O(1.657^n); for bipartite graphs this algorithm
can be further improved to time O(1.414^n).
For graphs of maximum degree three, a careful backtracking search can
find a Hamiltonian cycle (if one exists) in time O(1.251^n).
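To make the Bellman–Held–Karp dynamic program quoted above a bit more concrete, here is a minimal C++ sketch that only decides whether a Hamiltonian path exists; the adjacency-matrix representation and the function name are assumptions, and the 2^n-sized table limits it to roughly n ≤ 20 in practice:

```cpp
#include <vector>
using namespace std;

// Bellman–Held–Karp style check: dp[S][v] is true iff some simple path visits
// exactly the vertices in bitmask S and ends at vertex v.
// adj is an n x n adjacency matrix.
bool hasHamiltonianPath(const vector<vector<bool>>& adj) {
    int n = adj.size();
    vector<vector<char>> dp(1 << n, vector<char>(n, 0));
    for (int v = 0; v < n; ++v)
        dp[1 << v][v] = 1;                         // single-vertex paths
    for (int S = 1; S < (1 << n); ++S)
        for (int v = 0; v < n; ++v) {
            if (!(S & (1 << v)) || !dp[S][v]) continue;
            for (int w = 0; w < n; ++w)            // extend the path by a new vertex w
                if (!(S & (1 << w)) && adj[v][w])
                    dp[S | (1 << w)][w] = 1;
        }
    for (int v = 0; v < n; ++v)
        if (dp[(1 << n) - 1][v]) return true;      // some path covers every vertex
    return false;
}
```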
Related
I'm aware that the number of k-length walks between two vertices can be found by computing the k-th power of the adjacency matrix, but walks may traverse the same edge (or vertex) multiple times, and I want to count only simple paths of length k.
EDIT: I only want to count them, not compute them, preferably using matrix algebra. I could do a modified DFS, but that's less efficient than matrix math.
In general, there is no known way to accomplish this. One way to see this is that if you pick k to be the number of nodes in the graph, then you are asking for the number of Hamiltonian paths in the graph. However, the question of determining whether a graph contains a Hamiltonian path is a canonical NP-complete problem, and unless P = NP there's no polynomial-time algorithm for it.
Stated differently - the Hamiltonian path problem reduces to your problem in polynomial time. That makes your problem NP-hard, which means there's no known polynomial-time algorithm for it.
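For completeness, the walk-counting part that is feasible can be sketched as follows: raise the adjacency matrix to the k-th power, and entry (i, j) of the result counts k-edge walks (not simple paths) from i to j. The matrix type and the use of unsigned long long here are assumptions:

```cpp
#include <vector>
using namespace std;

using Matrix = vector<vector<unsigned long long>>;

// Multiply two square matrices; entry (i, j) of A^k counts k-edge walks from i to j.
Matrix multiply(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, vector<unsigned long long>(n, 0));
    for (int i = 0; i < n; ++i)
        for (int l = 0; l < n; ++l)
            for (int j = 0; j < n; ++j)
                C[i][j] += A[i][l] * B[l][j];
    return C;
}

// Compute A^k by repeated squaring (O(n^3 log k) arithmetic operations).
Matrix matrixPower(Matrix A, unsigned k) {
    int n = A.size();
    Matrix R(n, vector<unsigned long long>(n, 0));
    for (int i = 0; i < n; ++i) R[i][i] = 1;       // identity matrix = A^0
    while (k > 0) {
        if (k & 1) R = multiply(R, A);
        A = multiply(A, A);
        k >>= 1;
    }
    return R;
}
```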
I have a graph and I want to walk through it (not necessarily through all vertices), always taking the edge with the greatest weight. I cannot go through the same vertex twice, and I stop when there are no more moves I can make. What is the complexity? I assume it's n (where n is the number of vertices), but I'm not sure.
If you can't go through the same vertex twice, your upper bound for edge traversals is n.
It's easy to think of examples where that would be a tight bound (a single chain of connected vertices, for example).
However, keep in mind that complexity is defined for a given algorithm, not for a general task. You haven't described your algorithm or how your graph is organized, so as stated the question doesn't have a definite answer.
If, for example, the graph is a clique, picking the highest-weight edge for each traversal could itself take n computation steps (if the edges are kept in an unsorted list in each vertex), making the naive algorithm O(n^2) in this case. Other representations may have different complexity, but require different algorithms.
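As a rough illustration of that naive greedy walk (the adjacency-list representation and the Edge struct are assumptions):

```cpp
#include <vector>
using namespace std;

struct Edge { int to; int weight; };

// Greedy walk: from `start`, repeatedly follow the heaviest edge to an unvisited
// vertex; stop when no such edge exists. With unsorted adjacency lists this scans
// the current vertex's edges once per step, i.e. O(n^2) on a clique.
vector<int> greedyWalk(const vector<vector<Edge>>& adj, int start) {
    int n = adj.size();
    vector<char> visited(n, 0);
    vector<int> walk;
    int cur = start;
    visited[cur] = 1;
    walk.push_back(cur);
    while (true) {
        int best = -1, bestWeight = 0;
        for (const Edge& e : adj[cur])
            if (!visited[e.to] && (best == -1 || e.weight > bestWeight)) {
                best = e.to;
                bestWeight = e.weight;
            }
        if (best == -1) break;          // no move left
        visited[best] = 1;
        walk.push_back(best);
        cur = best;
    }
    return walk;
}
```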
EDIT
If you're asking about finding the path with the greatest overall weight (which may require you, in some steps, to pick an edge that doesn't have the highest weight), then the problem is NP-hard. If it had a polynomial algorithm, then you could take an unweighted graph, find its longest path (a known NP-hard problem, as jimifiki pointed out), and solve that with the same algorithm.
From Longest Path Problem
This problem is called the Longest Path Problem, and is NP-complete.
My graph contains no edges that connect a vertex to itself, and there is at most one edge between any two vertices. From Wikipedia I got to know about some algorithms that are used for calculating shortest paths under given conditions. One of the most famous is Dijkstra's algorithm, which finds shortest paths from a source vertex to all other vertices in the graph.
But by using Dijkstra's algorithm, I am unnecessarily exploring all the vertices, whereas my goal is just to find the shortest path from a single source to a single destination. Which strategy should I use here, so that I need not explore all the other vertices?
One approach of mine is to use bidirectional BFS, by which I mean running two BFS searches, one from the source node and one from the destination node. As soon as I find the same child in both trees for the first time, I can stop both searches. The path from the source to that child, joined with the path from that child to the destination, would be my shortest path from source to destination.
But out of all the approaches described by Wikipedia and this bidirectional BFS, which one suits my graph best?
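For reference, a minimal sketch of that bidirectional BFS idea on an unweighted adjacency-list graph might look like this; it expands complete levels from each side and returns only the shortest distance rather than the path itself (the representation and these simplifications are assumptions):

```cpp
#include <vector>
#include <queue>
#include <algorithm>
using namespace std;

// Bidirectional BFS on an unweighted graph: expand one full level from each side
// alternately; once some vertex has been reached by both searches, the minimum of
// distS[v] + distT[v] over such vertices is the shortest source-target distance.
// Returns -1 if target is unreachable.
int bidirectionalBfs(const vector<vector<int>>& adj, int source, int target) {
    int n = adj.size();
    if (source == target) return 0;
    vector<int> distS(n, -1), distT(n, -1);
    queue<int> qS, qT;
    distS[source] = 0; qS.push(source);
    distT[target] = 0; qT.push(target);

    auto expandLevel = [&](queue<int>& q, vector<int>& dist) {
        int levelSize = q.size();
        while (levelSize--) {
            int u = q.front(); q.pop();
            for (int v : adj[u])
                if (dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
        }
    };

    while (!qS.empty() && !qT.empty()) {
        // Expand the smaller frontier first (a common heuristic).
        if (qS.size() <= qT.size()) expandLevel(qS, distS);
        else                        expandLevel(qT, distT);

        int best = -1;
        for (int v = 0; v < n; ++v)            // any vertex seen from both sides?
            if (distS[v] != -1 && distT[v] != -1)
                best = (best == -1) ? distS[v] + distT[v]
                                    : min(best, distS[v] + distT[v]);
        if (best != -1) return best;
    }
    return -1;
}
```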
It depends if there's any apparent heuristic function that you could use or if you don't have any further usable information about your graph.
Your options are:
BFS: in the general unweighted case, or if you don't care that much about computation time.
Dijkstra (Dijkstra "is" BFS with a priority queue): if your edges have non-negative weights/prices and you care about CPU time.
A* (A* "is" Dijkstra with a heuristic function, e.g. straight-line distance on a city map): if you want it to be really fast in the average case, but you have to provide a good heuristic function.
For some graph problems it may be possible to use dynamic programming or other algorithmic tools; it depends on the situation. Further information can be found in the tutorials at http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=alg_index ...
From Introduction to Algorithms (CLRS) second edition, page 581 :
Find a shortest path from u to v for given vertices u and v. If we solve the single-source problem with source vertex u, we solve this problem also. Moreover, no algorithms for this problem are known that run asymptotically faster than the best single-source algorithms in the worst case.
So, stick to Dijkstra's algorithm :)
The Wikipedia article spells out the answer for you:
If we are only interested in a shortest path between vertices source and target, we can terminate the search at line 13 if u = target.
You could use Dijkstra's algorithm and optimize it so that you stop exploring paths that are already longer than the shortest one found so far, since those are not going to become shorter (provided you don't have negative weights on your edges).
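A minimal sketch of Dijkstra's algorithm with that early exit, assuming an adjacency list of (neighbor, weight) pairs with non-negative weights:

```cpp
#include <vector>
#include <queue>
#include <limits>
#include <functional>
using namespace std;

// Dijkstra that stops as soon as `target` is settled (popped from the queue).
// Returns the shortest distance, or -1 if target is unreachable.
long long dijkstraEarlyExit(const vector<vector<pair<int,int>>>& adj,
                            int source, int target) {
    const long long INF = numeric_limits<long long>::max();
    vector<long long> dist(adj.size(), INF);
    // Min-heap of (distance, vertex).
    priority_queue<pair<long long,int>,
                   vector<pair<long long,int>>,
                   greater<pair<long long,int>>> pq;
    dist[source] = 0;
    pq.push({0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;         // stale queue entry
        if (u == target) return d;          // target settled: shortest path found
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
    }
    return -1;
}
```

With a binary heap this runs in O((V + E) log V); the early exit does not change the worst case, but it often saves a lot of work in practice when the target is settled early.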
The only difference I could think of is that in the Travelling Salesman Problem (TSP) I need to find a minimum-cost permutation of all the vertices in the graph, whereas in the shortest-path problem there is no need to consider all the vertices; we can search the state space for minimum-length routes between the two endpoints. Can anyone suggest more differences?
You've already called out the essential difference: the TSP is to find a path that contains a permutation of every node in the graph, while in the shortest path problem, any given shortest path may, and often does, contain a proper subset of the nodes in the graph.
Other differences include:
The TSP solution requires its answer to be a cycle.
The TSP solution will necessarily repeat a node in its path (the starting node, since the tour is a cycle), while a shortest path will not (unless one is looking for a shortest path from a node to itself).
TSP is an NP-complete problem, whereas shortest path is known to be solvable in polynomial time.
If you are looking for a precise statement of the difference, I would say you just need to replace your idea of the "permutation" with the more technical and precise term "simple cycle visiting every node in the graph", or better, "Hamiltonian cycle":
The TSP requires one to find the simple cycle covering every node in the graph with the smallest weight (alternatively, the Hamilton cycle with the least weight). The Shortest Path problem requires one to find the path between two given nodes with the smallest weight. Shortest paths need not be Hamiltonian, nor do they need to be cycles.
With the shortest-path problem you consider paths between two nodes. With the TSP you consider paths through all nodes. This makes the latter much more difficult.
Consider two paths between nodes A and B, one through D and the other through C, and let the one through C be the longer path. In the shortest-path problem this path can be discarded immediately. In the TSP it is perfectly possible that this path is part of the overall solution, because you have to visit C anyway, and visiting it later might be even more expensive.
Therefore you can't break the TSP down into similar but smaller sub-problems.
The shortest-path problem just has a source and a target; we need to find the shortest path between them, which can be done in polynomial time (the output of a single-source algorithm is a shortest-path tree).
Finding shortest paths:
Undirected graphs with non-negative weights:
Dijkstra's algorithm with a list: O(V^2)
Directed graphs with arbitrary weights but no negative cycles:
Bellman–Ford algorithm: time complexity O(VE)
The Floyd–Warshall algorithm is used to find the shortest paths between all pairs of vertices (see the sketch below).
The TSP (Travelling Salesman Problem) is not like that: we have to cover every node starting from the source and finally return to the source at minimum cost, so the result must be a cycle. TSP is an NP-complete problem.
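As a concrete illustration of the all-pairs case, a minimal Floyd–Warshall sketch, assuming an adjacency-matrix input in which missing edges are set to INF:

```cpp
#include <vector>
#include <limits>
using namespace std;

// Marker for "no edge"; the caller initializes dist[i][j] with edge weights,
// INF where there is no edge, and 0 on the diagonal.
const long long INF = numeric_limits<long long>::max() / 4;

// Floyd–Warshall: relax every pair (i, j) through every intermediate vertex k.
// Runs in O(V^3); assumes there are no negative cycles.
void floydWarshall(vector<vector<long long>>& dist) {
    int n = dist.size();
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (dist[i][k] < INF && dist[k][j] < INF &&
                    dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}
```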
Ref:
https://en.wikipedia.org/wiki/Shortest_path_problem
https://en.wikipedia.org/wiki/Travelling_salesman_problem
http://www.geeksforgeeks.org/travelling-salesman-problem-set-1/
http://www.geeksforgeeks.org/greedy-algorithms-set-6-dijkstras-shortest-path-algorithm/
https://www.hackerearth.com/practice/algorithms/graphs/shortest-path-algorithms/tutorial/
In TSP, you need to both visit all nodes and also return to your starting point. This complicates the problem immensely.
What is dynamic programming algorithm for finding a Hamiltonian cycle in an undirected graph?
I have seen somewhere that there exists an algorithm with O(n.2^n) time complexity.
There is indeed an O(n · 2^n) dynamic-programming algorithm for finding Hamiltonian cycles. The idea, which is a general one that can reduce many O(n!) backtracking approaches to O(n^2 · 2^n) or O(n · 2^n) (at the cost of using more memory), is to consider subproblems that are sets with specified "endpoints".
Here, since you want a cycle, you can start at any vertex. So fix one, call it x. The subproblems would be: “For a given set S and a vertex v in S, is there a path starting at x and going through all the vertices of S, ending at v?” Call this, say, poss[S][v].
As with most dynamic programming problems, once you define the subproblems the rest is obvious: loop over all the 2^n sets S of vertices in any "increasing" order, and for each v in each such S, you can compute poss[S][v] as:
poss[S][v] = (there exists some u in S such that poss[S−{v}][u] is True and an edge u->v exists)
Finally, there is a Hamiltonian cycle iff there is a vertex v such that an edge v->x exists and poss[S][v] is True, where S is the set of all vertices (other than x, depending on how you defined it).
If you want the actual Hamiltonian cycle instead of just deciding whether one exists or not, make poss[S][v] store the actual u that made it possible instead of just True or False; that way you can trace back a path at the end.
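A minimal C++ sketch of that DP, assuming the graph is given as an adjacency matrix; poss follows the description above, and instead of storing u inside poss, this sketch keeps a separate pred table recording the vertex u that made each state possible (which amounts to the same thing), so the cycle can be traced back at the end. The function and variable names are illustrative:

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Returns a Hamiltonian cycle as a vertex sequence starting and ending at vertex 0,
// or an empty vector if none exists. adj is an n x n adjacency matrix and x = 0 is
// the fixed start vertex. poss[S][v]: is there a path that starts at x, covers exactly
// the vertices in bitmask S, and ends at v?
vector<int> hamiltonianCycle(const vector<vector<bool>>& adj) {
    int n = adj.size();
    if (n < 3) return {};                            // a simple cycle needs >= 3 vertices
    int full = (1 << n) - 1;
    vector<vector<char>> poss(1 << n, vector<char>(n, 0));
    vector<vector<int>> pred(1 << n, vector<int>(n, -1));
    poss[1][0] = 1;                                  // the trivial path {x} ending at x
    for (int S = 1; S <= full; S += 2) {             // every subproblem contains x = 0
        for (int v = 0; v < n; ++v) {
            if (!(S & (1 << v)) || !poss[S][v]) continue;
            for (int w = 1; w < n; ++w)              // extend by an unvisited neighbor w
                if (!(S & (1 << w)) && adj[v][w] && !poss[S | (1 << w)][w]) {
                    poss[S | (1 << w)][w] = 1;
                    pred[S | (1 << w)][w] = v;
                }
        }
    }
    for (int v = 1; v < n; ++v)
        if (poss[full][v] && adj[v][0]) {            // an edge v -> x closes the cycle
            vector<int> cycle;
            int S = full, cur = v;
            while (cur != -1) {                      // walk pred pointers back to x
                cycle.push_back(cur);
                int p = pred[S][cur];
                S ^= (1 << cur);
                cur = p;
            }
            reverse(cycle.begin(), cycle.end());     // now starts at x = 0 ...
            cycle.push_back(0);                      // ... and returns to x
            return cycle;
        }
    return {};                                       // no Hamiltonian cycle
}
```

The table uses O(n · 2^n) memory, so this is practical only for small n (around 20 or so).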
I can't pluck out that particular algorithm, but there is more about Hamiltonian Cycles on The Hamiltonian Page than you will likely ever need. :)
This page intends to be a comprehensive listing of papers, source code, preprints, technical reports, etc., available on the Internet about the Hamiltonian Cycle and Hamiltonian Path Problems as well as some associated problems.
(original URL, currently 404), (Internet Archive)