NP-complete theory: Long Path algorithm

The question is basically to show that for any unweighted graph G(V,E), if we could find a simple path of length at least floor(|V|/2), we could compute Hamiltonian paths.
Basically, that Hamiltonian paths are polynomial-time reducible to the long path problem.
I have tried to find a graph in which a path of size floor(|V|/2) would map to another graph's Hamiltonian path, yet I have not gotten anywhere with that approach.
Maybe there is a way to prove there is only a finite number of paths of length greater than |V|/2 for any graph, which would mean we could just repeat our long path algorithm several times to find our Hamiltonian paths. But I am not sure about this.

As a hint, suppose that you want to find a Hamiltonian path in a graph with n nodes. What happens if you create a new graph that's two independent copies of the original graph and ask whether that new graph has a simple path going through n nodes?
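To make the hint concrete, here is a minimal sketch of the reduction, assuming a hypothetical long_path routine that returns a simple path on at least floor(|V|/2) vertices when one exists (the routine name and the adjacency-dict representation are illustrative, not part of the original question):

    # Hypothetical oracle assumed: long_path(graph) returns a list of
    # vertices forming a simple path on >= floor(|V|/2) vertices, or None.
    def has_hamiltonian_path_via_long_path(graph, long_path):
        """graph: adjacency dict {vertex: set(neighbours)} on n vertices."""
        n = len(graph)
        # Build two independent copies of the graph: 2n vertices in total.
        doubled = {}
        for copy in (0, 1):
            for u, nbrs in graph.items():
                doubled[(copy, u)] = {(copy, v) for v in nbrs}
        # Any simple path on >= floor(2n/2) = n vertices must lie entirely
        # inside one copy (the copies are disconnected), so it visits all
        # n vertices of that copy: a Hamiltonian path of the original graph.
        path = long_path(doubled)
        return path is not None and len(path) >= n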
Hope this helps!

Related

Given a weighted undirected graph, how do I find a path whose total weight is close to a given value?

Suppose I have a weighted, undirected graph. Each edge has a positive weight. I would like to find a simple path (no vertices appear in the path twice) from a given source node (s) to a target node (t) which has the total sum of weights close to a given value (P).
Even though it sounds like a well-studied problem, I couldn't find a satisfying solution. Many graph algorithms aim to find the shortest path (in the sense of steps or cost), but not a path with a "matched" cost.
A naive solution would be finding all paths from s to t, compute sum of weights for each path and select the one that is close to P. However, finding all paths between two nodes in a graph is known to be #P-hard.
A possible solution could be to modify the A* algorithm, so that for each node in the frontier we get the cost from the root to that node (g) and estimate the cost from that node to the goal (h). Then, instead of choosing a node with the smallest g+h, we choose a node with the smallest |P - (g+h)|. However, I am not sure if this is the best solution.
Another thought is inspired by linear programming, since the objective of this problem is sum(weights of a path from s to t) - P = 0. I know the shortest path problem can be formulated as a linear program, but I am not sure how to formulate this problem as one.
Please help, thanks in advance!
This problem is NP-hard via a reduction from the Hamiltonian path problem. In that problem, you are given a graph and a pair of nodes s and t and are asked whether there's a simple path from s to t that passes through all the nodes in the graph. You can solve the Hamiltonian path problem via your problem as follows:
Assign each edge in the graph weight 1.
Find the s-t simple path whose weight is as close to n-1 as possible, where n is the number of nodes in the graph.
Return whether this path has cost exactly n-1.
If the graph has a Hamiltonian path, then that path will have cost n-1. Otherwise, no such path exists, and the best path found will have a cost lower than n-1.
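A sketch of that reduction, assuming a hypothetical closest_weight_path(graph, s, t, P) solver for the question's problem that returns the best simple s-t path found as a list of vertices (the names and the adjacency-dict representation are illustrative):

    def hamiltonian_s_t_path_exists(graph, s, t, closest_weight_path):
        """graph: adjacency dict {vertex: set(neighbours)}, unweighted.
        closest_weight_path: hypothetical solver for the asked problem."""
        n = len(graph)
        # Step 1: assign every edge weight 1.
        weighted = {u: {v: 1 for v in nbrs} for u, nbrs in graph.items()}
        # Step 2: ask for the simple s-t path with weight closest to n - 1.
        path = closest_weight_path(weighted, s, t, n - 1)
        if path is None:
            return False
        # Step 3: with unit weights, the path's cost is its number of edges,
        # and cost exactly n - 1 means every vertex is visited once.
        return len(path) - 1 == n - 1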

Need to detect an ending path in the graph with minimal weight

I would like to find the shortest weighted path from node a to any node. The destination node is not given. Any vertex may be visited many times.
The path weight should be less than Integer.MAX_VALUE.
What algorithm should I use? I can't work out the algorithm itself.
I tried the travelling salesman problem, but it doesn't match; nor does Dijkstra.
How to keep all the paths in memory is the major challenge here.
Edit: the graph is not directed and there are no negative weights.
References: http://en.wikipedia.org/wiki/Cycle_detection#Tortoise_and_hare
http://en.wikipedia.org/wiki/Travelling_salesman_problem
For an undirected graph with no negative weights, you can use the Floyd–Warshall algorithm to find all-pairs shortest paths with time complexity O(V^3), where V is the number of vertices. The algorithm also allows an easy way to reconstruct all of the computed shortest paths. It uses O(V^2) memory to store the distances and the additional information for reconstructing paths.
There are also other algorithms for solving this problem with better time complexity, but Floyd–Warshall is really easy to code and start with.
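A minimal Floyd–Warshall sketch with next-hop bookkeeping for reconstructing the paths (the 0..V-1 vertex labels and the edge-list input are assumptions made for the example):

    def floyd_warshall(num_vertices, edges):
        """edges: iterable of (u, v, w) for an undirected graph with
        non-negative weights. Returns distance and next-hop matrices."""
        INF = float("inf")
        dist = [[INF] * num_vertices for _ in range(num_vertices)]
        nxt = [[None] * num_vertices for _ in range(num_vertices)]
        for v in range(num_vertices):
            dist[v][v] = 0
            nxt[v][v] = v
        for u, v, w in edges:
            for a, b in ((u, v), (v, u)):        # undirected: both directions
                if w < dist[a][b]:
                    dist[a][b] = w
                    nxt[a][b] = b
        for k in range(num_vertices):
            for i in range(num_vertices):
                for j in range(num_vertices):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
                        nxt[i][j] = nxt[i][k]
        return dist, nxt

    def reconstruct_path(nxt, u, v):
        """Rebuild a shortest u-v path from the next-hop matrix."""
        if nxt[u][v] is None:
            return None
        path = [u]
        while u != v:
            u = nxt[u][v]
            path.append(u)
        return path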
Would like to find the shortest weighted path from the node a to any node.
I immediately think Dijkstra's algorithm, because that's exactly what it does.
how to keep in memory all the paths is the major challenge here ..
For each node in the graph, keep track of its previous node that would result in the shortest path. Take a look at the pseudocode, line 20.
To build the path, simply go backwards from the destination node until you reach the source node.
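Sketched in code, that is ordinary Dijkstra plus a prev map, walked backwards at the end (the adjacency-dict input with non-negative weights is an assumption made for the example):

    import heapq

    def dijkstra(graph, source):
        """graph: {node: {neighbour: weight}} with non-negative weights.
        Returns (dist, prev), where prev[v] is v's predecessor on a
        shortest path from the source."""
        dist = {source: 0}
        prev = {source: None}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                          # stale heap entry
            for v, w in graph[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u                   # remember the previous node
                    heapq.heappush(heap, (nd, v))
        return dist, prev

    def path_to(prev, target):
        """Go backwards from the destination node until the source."""
        if target not in prev:
            return None                           # unreachable
        path = []
        while target is not None:
            path.append(target)
            target = prev[target]
        return path[::-1]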

Why can't we turn longest path into shortest path?

Today I read in Introduction to Algorithms about the longest path problem, which asks, for a weighted, directed graph, what the longest simple path between two given vertices is. The author used a wonderful example showing that dynamic programming fails for the longest path problem because the problem does not exhibit optimal substructure. It was commented that this problem is actually NP-complete, so it must be really hard.
Now here is my question: instead of assigning every edge a positive weight k > 0, what if we simply assign each edge the negative weight -k? Then each "longest path" would automatically be a shortest path, and if there are no loops in the shortest path by definition, there should not be any loops in the corresponding longest path. Hence, using a quite common trick, we "can" turn the longest path problem into the shortest path problem.
Can someone point out the mistake in my reasoning? I figure something must be wrong but do not really know what it is. It is extremely unlikely my argument is correct, for obvious reasons. Reading the pseudocode provided in the book, it seems the shortest path algorithm does not prohibit negative weights, and after all, everything can be "elevated" by adding a large enough constant to make the weights positive. I am a beginner in algorithms, so this question might be trivial to experts.
The mistake in your reasoning is this line:
if there are no loops in the shortest path by definition, there should not be any loops in the corresponding longest path.
If you have a graph where all edges have negative weight (which can happen in the transformation you're describing) and there's a negative cycle, then there is no shortest path between any two nodes on that cycle, since any path between them can have its cost reduced by following the cycle more and more times. Since there is no shortest path in that case, your reasoning breaks down.
Now, you can argue that you should instead look for the shortest simple path between the nodes (that is, a path with no repeated vertices). Unfortunately, though, that problem is also NP-hard, so this reduction doesn't actually buy you anything.
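A tiny illustration of where the breakdown happens (the three-cycle and its weights are made up for the example): negate the weights of any positive-weight cycle and you get a negative cycle, so "shortest" walks through it are unbounded below unless you forbid repeated vertices, which is exactly the NP-hard shortest simple path problem.

    # A 3-cycle with positive weights, then the proposed negation trick.
    edges = [("a", "b", 2), ("b", "c", 3), ("c", "a", 4)]
    negated = [(u, v, -w) for u, v, w in edges]

    # The cycle a -> b -> c -> a now has total weight -(2 + 3 + 4) = -9.
    # Every extra lap around it lowers the cost further, so no shortest
    # (non-simple) path exists between nodes on the cycle.
    print(sum(w for _, _, w in negated))  # -9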
Hope this helps!
if there are no loops in the shortest path by definition, there should not be any loops in the corresponding longest path
...and so that particular instance of the original problem is solvable in polynomial time. And if the original graph is a DAG, then this method solves it in linear time.
The complexity statement doesn't say that all graphs are that hard to solve, just that some are.
From your own link:
In contrast to the shortest path problem, which can be solved in polynomial time in graphs without negative-weight cycles, the longest path problem is NP-hard, meaning that it cannot be solved in polynomial time for arbitrary graphs unless P = NP.
[...]
For most graphs, this transformation is not useful because it creates cycles of negative length in −G. But if G is a directed acyclic graph, then no negative cycles can be created, and longest paths in G can be found in linear time by applying a linear time algorithm for shortest paths in −G, which is also a directed acyclic graph.
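A sketch of that DAG special case: take a topological order, then relax edges in that order. Maximizing with the original positive weights is equivalent to running a shortest-path pass over −G (the 0..n-1 vertex labels and the edge-list input are assumptions made for the example):

    from collections import defaultdict, deque

    def longest_path_dag(num_vertices, edges, source):
        """edges: (u, v, w) triples of a DAG with vertices 0..num_vertices-1.
        Returns dist[v] = weight of the longest path from source to v,
        i.e. minus the shortest-path distance in the negated graph."""
        adj = defaultdict(list)
        indeg = [0] * num_vertices
        for u, v, w in edges:
            adj[u].append((v, w))
            indeg[v] += 1
        # Kahn's algorithm for a topological order.
        order = []
        queue = deque(v for v in range(num_vertices) if indeg[v] == 0)
        while queue:
            u = queue.popleft()
            order.append(u)
            for v, _ in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
        NEG_INF = float("-inf")
        dist = [NEG_INF] * num_vertices
        dist[source] = 0
        for u in order:                      # relax edges in topological order
            if dist[u] == NEG_INF:
                continue
            for v, w in adj[u]:
                if dist[u] + w > dist[v]:
                    dist[v] = dist[u] + w
        return dist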

What would be a suitable algorithm?

I am trying to write a C++ program. I am working on a problem in which I have a number of points, and I need to find a path that goes through all of them. This is not actually TSP, because as far as I know, in TSP it is possible to travel from every point to every other point. In my case the path network between the points is fixed, and I just need to find a suitable path that goes through all the points, given that not every point may have a connection to every other point. So what algorithm am I supposed to follow?
It seems you are looking for a way to traverse a graph. If so, have you tried breadth-first search (http://en.wikipedia.org/wiki/Breadth-first_search) or depth-first search (http://en.wikipedia.org/wiki/Depth-first_search) to traverse your graph?
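For reference, a minimal breadth-first traversal over an adjacency-dict graph; note that BFS/DFS visits every reachable vertex but does not by itself produce a single simple path through all of them, which is what the question actually asks for:

    from collections import deque

    def bfs(graph, start):
        """graph: {node: iterable of neighbours}. Returns nodes in BFS order."""
        seen = {start}
        order = []
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return order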
You want to find a Hamiltonian path for a graph.
In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in an undirected graph that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian path that is a cycle. Determining whether such paths and cycles exist in graphs is the Hamiltonian path problem, which is NP-complete.
Some techniques that exist:
There are n! different sequences of vertices that might be Hamiltonian paths in a given n-vertex graph (and are, in a complete graph), so a brute force search algorithm that tests all possible sequences would be very slow. There are several faster approaches. A search procedure by Frank Rubin divides the edges of the graph into three classes: those that must be in the path, those that cannot be in the path, and undecided. As the search proceeds, a set of decision rules classifies the undecided edges, and determines whether to halt or continue the search. The algorithm divides the graph into components that can be solved separately. Also, a dynamic programming algorithm of Bellman, Held, and Karp can be used to solve the problem in time O(n^2 2^n). In this method, one determines, for each set S of vertices and each vertex v in S, whether there is a path that covers exactly the vertices in S and ends at v. For each choice of S and v, a path exists for (S, v) if and only if v has a neighbor w such that a path exists for (S − v, w), which can be looked up from already-computed information in the dynamic program.
Andreas Björklund provided an alternative approach using the inclusion–exclusion principle to reduce the problem of counting the number of Hamiltonian cycles to a simpler counting problem, of counting cycle covers, which can be solved by computing certain matrix determinants. Using this method, he showed how to solve the Hamiltonian cycle problem in arbitrary n-vertex graphs by a Monte Carlo algorithm in time O(1.657^n); for bipartite graphs this algorithm can be further improved to time O(1.414^n).
For graphs of maximum degree three, a careful backtracking search can find a Hamiltonian cycle (if one exists) in time O(1.251^n).
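A sketch of the Bellman–Held–Karp dynamic program quoted above, deciding in O(n^2 2^n) time whether a Hamiltonian path exists (the 0..n-1 vertex labels and the adjacency dict of sets are assumptions made for the example):

    def has_hamiltonian_path(graph):
        """graph: {v: set(neighbours)} with vertices labelled 0..n-1.
        dp[S][v] is True if some path covers exactly the vertex set S
        (a bitmask) and ends at v."""
        n = len(graph)
        dp = [[False] * n for _ in range(1 << n)]
        for v in range(n):
            dp[1 << v][v] = True                 # single-vertex paths
        for S in range(1 << n):
            for v in range(n):
                if not dp[S][v]:
                    continue
                for w in graph[v]:               # extend the path by one edge
                    if not (S >> w) & 1:
                        dp[S | (1 << w)][w] = True
        full = (1 << n) - 1
        return any(dp[full][v] for v in range(n))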

What is the difference between Travelling Salesman and finding Shortest Path?

The only difference I could think of for the question is that in the Travelling Salesman Problem (TSP) I need to find a minimum permutation of all the vertices in the graph, whereas in the shortest path problem there is no need to consider all the vertices; we can search the state space for minimum-length routes. Can anyone suggest more differences?
You've already called out the essential difference: the TSP is to find a path that contains a permutation of every node in the graph, while in the shortest path problem, any given shortest path may, and often does, contain a proper subset of the nodes in the graph.
Other differences include:
The TSP solution requires its answer to be a cycle.
The TSP solution will necessarily repeat a node in its path, while a shortest path will not (unless one is looking for shortest path from a node to itself).
TSP is an NP-complete problem, while shortest path is known to be solvable in polynomial time.
If you are looking for a precise statement of the difference, I would say you just need to replace your idea of the "permutation" with the more technical and precise term "simple cycle visiting every node in the graph", or better, "Hamilton cycle":
The TSP requires one to find the simple cycle covering every node in the graph with the smallest weight (alternatively, the Hamilton cycle with the least weight). The Shortest Path problem requires one to find the path between two given nodes with the smallest weight. Shortest paths need not be Hamiltonian, nor do they need to be cycles.
With the shortest path problem you consider paths between two nodes. With the TSP you consider paths between all nodes. This makes the latter much more difficult.
Consider two paths between nodes A and B, one through D and the other through C, and let the one through C be the longer path. In the shortest path problem this path can be discarded immediately. In the TSP it is perfectly possible that this path is part of the overall solution, because you will have to visit C anyway, and visiting it later might be even more expensive.
Therefore you can't break the TSP down into similar but smaller sub-problems.
The shortest path problem just has a source and a target; we need to find the shortest path between them, which can be done in polynomial time (the output of a single-source algorithm naturally forms a shortest-path tree).
Finding shortest paths:
Undirected graphs: Dijkstra's algorithm with a list, O(V^2).
Directed graphs with arbitrary weights but without negative cycles: Bellman–Ford algorithm, time complexity O(VE).
Floyd–Warshall's algorithm is used to find the shortest paths between all pairs of vertices.
TSP (Travelling Salesman Problem) is not like that: we have to cover every node starting from the source and finally reach the source again at minimum cost, so eventually there must be a cycle. TSP is an NP-complete problem.
Ref:
https://en.wikipedia.org/wiki/Shortest_path_problem
https://en.wikipedia.org/wiki/Travelling_salesman_problem
http://www.geeksforgeeks.org/travelling-salesman-problem-set-1/
http://www.geeksforgeeks.org/greedy-algorithms-set-6-dijkstras-shortest-path-algorithm/
https://www.hackerearth.com/practice/algorithms/graphs/shortest-path-algorithms/tutorial/
In TSP, you need to both visit all nodes and also return to your starting point. This complicates the problem immensely.
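To see why that blows up, here is a brute-force TSP sketch that literally tries every ordering of the remaining cities, which is O(n!) work, in contrast to the polynomial-time shortest-path algorithms listed above (the distance-matrix input is an assumption made for the example):

    from itertools import permutations

    def tsp_brute_force(dist, start=0):
        """dist: symmetric matrix dist[i][j]. Try every ordering of the
        remaining cities and return the cheapest tour returning to start."""
        n = len(dist)
        cities = [c for c in range(n) if c != start]
        best_cost, best_tour = float("inf"), None
        for perm in permutations(cities):
            tour = (start,) + perm + (start,)    # must return to the start
            cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
        return best_cost, best_tour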

Resources