I have tried Dijkstra's algorithm to find the maximum cost, but I am stuck on how to tackle the infinity problem. Kindly share any precise algorithm for this task.
If you want to modify Dijkstra's algorithm in order to find the longest path, then it's impossible (unless P = NP):
https://cs.stackexchange.com/questions/17980/is-it-possible-to-modify-dijkstra-algorithm-in-order-to-get-the-longest-path
This question is for my final year project. The project is about recommending a safe route to the user that avoids accident-prone streets. For this purpose, we are looking for an algorithm with better time complexity and space complexity than Dijkstra's.
Assuming you can phrase this problem as:
finding a path
in a directed graph
with non-negative weights
You can solve it with Thorup [2004]
This particular algorithm is claimed to run in O(E + V * log log V) time.
An example implementation can be found here
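For comparison, here is a minimal sketch of the standard binary-heap Dijkstra baseline (O(E log V)) that Thorup's result improves on; this is not Thorup's algorithm, and representing accident risk as a non-negative edge weight is my own assumption about how the routing problem would be encoded:

    import heapq

    # Standard Dijkstra baseline, not Thorup's algorithm. The graph format
    # (adjacency lists with non-negative weights, e.g. accident-risk scores)
    # is an assumption for illustration.
    def dijkstra(graph, source):
        """graph: {node: [(neighbor, non_negative_weight), ...]}; returns shortest distances from source."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue  # stale heap entry, a shorter distance was already found
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist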
A complete algorithm is an algorithm that finds a solution if one exists.
An optimal algorithm is an algorithm for which any solution it returns is optimal; in other words, there exists no better solution than the returned one.
That means optimality is based on completeness, right?
Which means an algorithm cannot be optimal without also being complete. Or did I get it wrong?
The algorithm that always returns nothing is optimal but not at all complete.
No, optimality is not based on completeness:
Imagine a complete algorithm that finds a solution, if there is any, for a family of problems, and an optimal algorithm that finds an optimal solution. The complete algorithm finds a solution for every problem in the family. However, the optimal algorithm might solve only one specific problem of the family.
In other words: the optimal algorithm gives you no guarantee about how many problems it can solve.
Suppose, for example, that your algorithm multiplies two numbers. A complete algorithm will return an answer for every a and b you might want to multiply.
An optimal algorithm might compute the optimal solution only for two specific values of a and b and simply return no solution for all other values.
An optimal algorithm is, by definition, one that finds an optimal solution whenever one exists. Therefore an optimal algorithm must be complete. See here for a post that already contains the answer.
I was revisiting my notes on Dynamic Programming. It's basically a memoized recursion technique, which stores away the solutions to smaller subproblems for later reuse in computing solutions to relatively larger subproblems.
The question I have is that in order to apply DP to a recursive problem, it must have optimal substructure. This basically means that an optimal solution to a problem contains optimal solutions to its subproblems.
Is it possible otherwise? I mean, have you ever seen a case where the optimal solution to a problem does not contain optimal solutions to its subproblems?
Please share some examples, if you know any, to deepen my understanding.
In dynamic programming, a given problem has the Optimal Substructure Property if an optimal solution of the given problem can be obtained by using optimal solutions of its subproblems.
For example, the shortest path problem has the following optimal substructure property: if a node X lies on the shortest path from a source node U to a destination node V, then the shortest path from U to V is the combination of the shortest path from U to X and the shortest path from X to V.
But the longest path problem doesn't have the Optimal Substructure property.
That is, the longest path between two nodes doesn't have to be made up of the longest paths between the intermediate nodes.
For example, the longest path q->r->t is not a combination of the longest path from q to r and the longest path from r to t, because the longest path from q to r is actually q->s->t->r.
So here, the optimal solution to a problem does not contain optimal solutions to its subproblems.
For more details you can read
Longest path problem from wikipedia
Optimal substructure from wikipedia
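To make the counterexample concrete, here is a small brute-force sketch; the 4-node edge set below (q-r, q-s, s-t, t-r) is my own reconstruction of a graph with this behaviour, not taken from the original figure:

    from itertools import permutations

    # Assumed 4-node undirected graph in the spirit of the example: edges q-r, q-s, s-t, t-r.
    edges = {('q', 'r'), ('q', 's'), ('s', 't'), ('t', 'r')}
    edges |= {(b, a) for a, b in edges}          # make it undirected
    nodes = {a for e in edges for a in e}

    def longest_simple_path(u, v):
        """Brute-force longest simple path from u to v (fine for 4 nodes)."""
        best = None
        for k in range(len(nodes) - 1):          # number of intermediate nodes
            for middle in permutations(nodes - {u, v}, k):
                path = (u,) + middle + (v,)
                if all((a, b) in edges for a, b in zip(path, path[1:])):
                    if best is None or len(path) > len(best):
                        best = path
        return best

    print(longest_simple_path('q', 't'))   # a longest q->t path, e.g. ('q', 'r', 't')
    print(longest_simple_path('q', 'r'))   # ('q', 's', 't', 'r'): the longest q->r path is
                                           # NOT the single edge q->r, so the prefix q->r of
                                           # the longest q->t path q->r->t is not itself
                                           # optimal: no optimal substructure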
You're perfectly right that the definitions are imprecise. DP is a technique for getting algorithmic speedups rather than an algorithm in itself. The term "optimal substructure" is a vague concept. (You're right again here!) To wit, every loop can be expressed as a recursive function: each iteration solves a subproblem for the successive one. Is every algorithm with a loop a DP? Clearly not.
What people actually mean by "optimal substructure" and "overlapping subproblems" is that subproblem results are used often enough to decrease the asymptotic complexity of solutions. In other words, the memoization is useful! In most cases the subtle implication is a decrease from exponential to polynomial time, or from O(n^k) to O(n^p) with p < k, or similar.
Example: there is an exponential number of paths between two nodes in a dense graph. DP finds the shortest path while looking at only a polynomial number of them, because the memos are extremely useful in this case.
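As a minimal sketch of that shortest-path case, here is a memoized recursion on a small made-up DAG (the node names and weights are assumptions); each subproblem is solved once and reused, so the work is O(V + E) even though the number of distinct paths can be exponential:

    from functools import lru_cache

    # Made-up weighted DAG: {node: [(successor, edge_weight), ...]}
    graph = {
        'u': [('a', 2), ('b', 5)],
        'a': [('b', 1), ('v', 7)],
        'b': [('v', 2)],
        'v': [],
    }

    @lru_cache(maxsize=None)            # the memo: each node is solved exactly once
    def dist_to_v(node):
        """Length of the shortest path from node to 'v' (inf if unreachable)."""
        if node == 'v':
            return 0
        return min((w + dist_to_v(nxt) for nxt, w in graph[node]), default=float('inf'))

    print(dist_to_v('u'))               # 5, via u -> a -> b -> v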
On the other hand, Traveling Salesman can be expressed as a memoized function (e.g. see this discussion), where the memos save roughly a factor of 2^n of work. But the number of TS paths through n cities is O(n!). This is so much bigger that the asymptotic run time is still super-exponential: n!/2^n still grows faster than any exponential. Such an algorithm is generally not called a Dynamic Program even though it follows very much the same pattern as the DP for shortest paths. Apparently it's only a DP if it gives a nice result!
To my understanding, this 'optimal substructure' property is necessary not only for Dynamic Programming, but to obtain a recursive formulation of the solution in the first place. Note that in addition to the Wikipedia article on Dynamic Programming, there is a separate article on the optimal substructure property. To make things even more involved, there is also an article about the Bellman equation.
You could try to solve the Traveling Salesman Problem by choosing the nearest city at each step, but that is the wrong method (a small counterexample is sketched below).
The whole idea is to narrow the problem down to a relatively small set of candidates for the optimal solution and use "brute force" to solve it.
So it had better be the case that solutions of the smaller subproblems are sufficient to solve the bigger problem.
This is expressed via a recursion, as a function of the optimal solutions of the smaller subproblems.
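To illustrate why the nearest-city rule fails, here is a minimal sketch with made-up city positions on a line; the greedy tour commits to a cheap first step and ends up longer than the brute-force optimum:

    from itertools import permutations

    # Hypothetical city positions on a line, chosen so nearest-neighbour is suboptimal.
    cities = [0, 1, -1.5, 4, -4]
    dist = lambda a, b: abs(cities[a] - cities[b])

    def tour_len(order):
        """Length of the closed tour visiting the city indices in `order`."""
        return sum(dist(a, b) for a, b in zip(order, order[1:] + order[:1]))

    def nearest_neighbour(start=0):
        tour, left = [start], set(range(len(cities))) - {start}
        while left:
            nxt = min(left, key=lambda c: dist(tour[-1], c))   # greedy: always the closest city
            tour.append(nxt)
            left.remove(nxt)
        return tour

    greedy = nearest_neighbour()
    best = min((list((0,) + p) for p in permutations(range(1, len(cities)))), key=tour_len)
    print(tour_len(greedy), tour_len(best))   # 18.0 vs 16.0: greedy is not optimal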
Answering this question:
Is it possible otherwise ? I mean have you ever seen a case where
optimal solution to a problem does not contain an optimal solution to
subproblems.
No, it is not possible, and this can even be proven.
You can try to apply dynamic programming to any recursive problem, but you will not get any better result if it doesn't have the optimal substructure property. In other words, the dynamic programming methodology is not useful to apply to a problem which doesn't have the optimal substructure property.
I read this article; it suggests (page 1025, last paragraph) that there is a polynomial-time algorithm to find the optimum of a k-TSP problem using binary search.
Using binary search would suggest that there exists an algorithm for checking whether a solution with cost < X exists, and that this algorithm is used for the binary search.
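To make that binary-search step concrete, a minimal sketch assuming such a decision oracle existed (the oracle name tour_of_cost_at_most is hypothetical, and the oracle itself is exactly the hard, missing part):

    def optimal_cost(instance, max_cost, tour_of_cost_at_most):
        """Binary search for the optimal value, given a yes/no decision oracle.

        Assumes integer costs in [0, max_cost] and that the (hypothetical) oracle is
        monotone: if a tour of cost <= x exists, one also exists for every x' >= x.
        """
        lo, hi = 0, max_cost
        while lo < hi:
            mid = (lo + hi) // 2
            if tour_of_cost_at_most(instance, mid):
                hi = mid          # a tour of cost <= mid exists: optimum is <= mid
            else:
                lo = mid + 1      # no such tour: optimum is > mid
        return lo                 # O(log max_cost) oracle calls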
I googled around for this and the only algorithm I could find was a non-deterministic one (which is pretty trivial), but obviously I'm looking for a deterministic one.
I am interested in this for learning purposes.
Any help/links would be appreciated.
EDIT
I am referring to finding the value of the optimal solution, not to finding the solution itself.
TSP is a special case of k-TSP where k = the number of nodes in the graph. If you had a solution for "what's the cheapest k-TSP route" that is polynomial in the graph size, then you'd have a polynomial solution to the decision version of TSP, which would imply that P = NP.
So the answer is no. A deterministic polynomial algorithm for both the decision and optimization versions (they're essentially the same) of k-TSP doesn't exist (yet).
The paper you mentioned proposes a polynomial-time approximation algorithm for the directed k-TSP problem.
Approximation algorithms are those which are guaranteed to yield solutions with a limited deviation from the optimal solution value. There are examples of polynomial-time approximation algorithms for NP-Hard problems: the Christofides Algorithm yields, in time O(n³), solutions to the metric TSP problem whose values are at most 3/2 the value of the optimal solution.
David Karger, in a lecture (link)
mentions a randomized algorithm for the k-TSP problem which runs in time polynomial in n (but exponential in k). It is based on the idea of color coding: color each node with a random color in [1..k], and find a shortest colorful path (one in which each color appears exactly once). With a simple dynamic programming algorithm, this approach gives a runtime of O(n^2 2^k) per attempt, and a single attempt succeeds (in finding the path with minimal cost) with probability about e^{-k}. By repeating on the order of e^k times, one obtains an algorithm that finds the minimum k-TSP path with high probability.
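A minimal sketch of that color-coding dynamic program, under my own assumptions about the input (a node list and a dict weight[(u, v)] of edge costs) and with the number of trials left as a parameter; it illustrates the idea from the lecture rather than reproducing its code:

    import math
    import random

    def colorful_min_path(nodes, weight, k, trials=100):
        """Approximate the minimum cost of a simple path on exactly k vertices.

        Each trial colors the nodes randomly with k colors and runs a DP over
        (color subset, endpoint) pairs in O(n^2 * 2^k). A trial can find the true
        optimum only if that optimal path happens to be colorful (probability
        roughly e^-k), so on the order of e^k trials are needed in practice.
        The graph representation weight[(u, v)] is an assumption for this sketch.
        """
        best = math.inf
        full = (1 << k) - 1
        for _ in range(trials):
            color = {v: random.randrange(k) for v in nodes}
            # dp[(mask, v)] = min cost of a colorful path ending at v using exactly the colors in mask
            dp = {(1 << color[v], v): 0.0 for v in nodes}
            for size in range(1, k):                      # grow paths one node at a time
                for (mask, v), cost in list(dp.items()):
                    if bin(mask).count('1') != size:
                        continue
                    for u in nodes:
                        if (v, u) in weight and not (mask >> color[u]) & 1:
                            key = (mask | (1 << color[u]), u)
                            if cost + weight[(v, u)] < dp.get(key, math.inf):
                                dp[key] = cost + weight[(v, u)]
            best = min([best] + [c for (m, _), c in dp.items() if m == full])
        return best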
Can anybody tell me how I can compare the optimal TSP solution with heuristics? I have implemented a TSP heuristic but don't know how to compare them. In fact, how can I find the optimal cost of a TSP instance? Any method or guess?
Thanks
Check your solutions against well-known benchmark instances with known optima:
Download the data from TSPLIB here and compare your solutions with the optimal values here
Solving the TSP to optimality is an NP-hard problem.
To assess the quality of a heuristic solution, you have several options:
Compare it to heuristic solutions produced by other algorithms. This will give you an idea of which heuristics work better on the given instance, but obviously won't tell you anything about how close you are to the optimal solution.
Compare to the optimal solution. Concorde is probably your best bet for computing this.
Compute a lower bound for the TSP instance, and compare the heuristic solution to that. The two standard approaches are the Held-Karp lower bound and the assignment problem relaxation.
Use instances with known optimal solutions, such as those in TSPLIB.
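As a minimal sketch of the last option (and of how to "compare" in general): compute the heuristic tour's length and report its relative gap to the known optimum. The 4-city distance matrix below is a made-up example whose optimal tour 0-1-3-2-0 has length 80:

    def tour_length(tour, dist):
        """Total length of the closed tour `tour` under the distance matrix dist[i][j]."""
        return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

    def optimality_gap(heuristic_len, optimal_len):
        """Relative gap, e.g. 0.1875 means the heuristic tour is 18.75% above the optimum."""
        return (heuristic_len - optimal_len) / optimal_len

    # Made-up symmetric 4-city instance; its optimal tour 0-1-3-2-0 has length 80.
    dist = [[0, 10, 15, 20],
            [10, 0, 35, 25],
            [15, 35, 0, 30],
            [20, 25, 30, 0]]

    h = tour_length([0, 1, 2, 3], dist)        # some heuristic tour: length 95
    print(h, optimality_gap(h, 80))            # 95 0.1875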