Single-source shortest path with a lower-bound constraint on the minimum cost - algorithm

Problem description:
Given a graph G (as an adjacency matrix and adjacency list), with a source vertex s and a destination vertex d, find the shortest path from s to d subject to a constraint. The constraint is that the shortest path cost c has a lower bound, i.e., c must be the smallest cost among all paths whose cost is greater than or equal to an assigned lower bound N.
I understand that with this constraint, a conventional SSSP algorithm like Bellman-Ford cannot work correctly. How can I find the most efficient algorithm for this problem?

I suppose you meant a walk, since a path cannot have cycles.
Unfortunately, the change-making problem, which is NP-complete, can easily be modeled as an instance of your problem, so your problem is NP-hard.
Change-making problem: Given N types of coins of value c_i each, can a number X be changed exactly using those N coin types?
Modelling: Assume all c_i are even (if not, double every c_i and also X). Create N + 2 vertices, where the i-th vertex represents the i-th coin for 1 <= i <= N, and where the (N+1)-th and (N+2)-th vertices each have an edge to every coin vertex i with cost c_i / 2. A walk from vertex N+1 to vertex N+2 must alternate between these two endpoint vertices and the coin vertices, so each pass through coin vertex i contributes exactly c_i to the total cost. The problem is then equivalent to finding a shortest walk of cost at least X, which is therefore NP-complete. The reduction should be obvious, but if further explanation is needed I can edit my answer.
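To make the construction concrete, here is a minimal sketch that builds the gadget graph; the representation and the name build_gadget are my own, not part of the answer:

    # Hypothetical helper illustrating the reduction (Python).
    def build_gadget(coins):
        coins = [2 * c for c in coins]       # double all values (double X as well)
        n = len(coins)
        src, dst = n, n + 1                  # the (N+1)-th and (N+2)-th vertices
        adj = {v: [] for v in range(n + 2)}  # undirected adjacency list
        for i, c in enumerate(coins):
            for endpoint in (src, dst):
                adj[endpoint].append((i, c // 2))
                adj[i].append((endpoint, c // 2))
        return adj, src, dst

    adj, s, d = build_gadget([3, 5])
    print(adj[s])   # [(0, 3), (1, 5)]: each src->coin->dst pass costs one full coin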

Related

All paths of length k from a given vertex

Let's say I have a directed graph G(V, E) with edge costs (real numbers) in (0, 1). For a given vertex i, I need to find all pairs of vertices (i, j), starting from i, that "match". Two vertices (i, j) match if there is a directed path from i to j of length exactly k (k is a given number that is relatively small and can be considered constant) with cost >= C (C is a given number). The cost of a path is the product of its edge costs. For example, if a path of length 2 starting at i and ending at j consists of edges e1 and e2, then cost_of_path = cost(e1) * cost(e2).
This has to be done in O(E + V*k). So what I thought of is modifying the DFS algorithm, updating the distances from the given starting vertex i until the paths reach length k; if they don't, we can't have a match. However, I am having a hard time figuring out exactly what to modify in the DFS. Any ideas?
When you need to consider paths with a fixed number of edges, dynamic programming often comes to the rescue (while other approaches often fail).
Let's denote by dp[v][j] the maximal cost of a path from vertex i (fixed) to vertex v that has exactly j edges.
For the starting values, you can set j == 1: dp[v][1] is the cost of the edge from i to v (or 0 if no such edge exists). Or, if you think about it, it is cleaner to start at j == 0 instead: dp[i][0] = 1, while dp[v][0] = 0 for v != i.
Now, if you have values for some j, it is easy to calculate values for j+1:
dp[v][j+1] = max( dp[v'][j] * cost((v', v)) ) over all edges (v', v) entering v
This is very similar to the Bellman-Ford algorithm, except that Bellman-Ford does not need to track the number of edges and can therefore use a one-dimensional array.
This gives you a solution in O((E+V)*k). Not exactly what you requested, but I doubt that a solution in O(E + V*k) exists.
(In the solution above, I assume that the constant C is positive, so a zero-cost path is equivalent to the path being absent altogether. If you need to, you can handle the C == 0 case separately.)
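A compact sketch of this DP, assuming an adjacency-list graph; the name matching_pairs and the toy graph are mine:

    # dp[v] = max product-cost of an i -> v path with exactly j edges
    def matching_pairs(graph, i, k, C):
        vertices = graph.keys()
        dp = {v: 0.0 for v in vertices}
        dp[i] = 1.0                          # base case j == 0
        for _ in range(k):                   # extend all paths by one edge
            nxt = {v: 0.0 for v in vertices}
            for u in vertices:
                if dp[u] > 0.0:
                    for v, cost in graph[u]:
                        nxt[v] = max(nxt[v], dp[u] * cost)
            dp = nxt
        return [v for v in vertices if dp[v] >= C]

    g = {0: [(1, 0.9), (2, 0.5)], 1: [(2, 0.8)], 2: [(0, 0.7)]}
    print(matching_pairs(g, 0, 2, 0.7))      # [2]: the path 0 -> 1 -> 2 costs 0.72

Each of the k iterations touches every vertex and every edge once, giving the O((E+V)*k) bound stated above.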

Find Path of a Specific Weight in a Weighted DAG

Given a DAG where all edges have a positive weight, and given a value N.
Is there an algorithm to compute a simple path (no cycles or repeated nodes) with total weight exactly N?
I am aware of the algorithm for finding a path with a given number of edges, but I am somewhat confused about how to handle a given path weight.
Can Dijkstra's algorithm be modified for this case? Or anything else?
This is NP-complete, so don't expect any reasonably fast (polynomial-time) algorithm. Here's a reduction from the NP-complete Subset Sum problem, where we are given a multiset of n integers X = {x_1, x_2, ..., x_n} and a number k, and asked if there is any submultiset of the n numbers that sum to exactly k:
Create a graph G with n+1 vertices v_1, v_2, ..., v_{n+1}. For each vertex v_i, add edges to every higher-numbered vertex v_j, and give all these edges weight x_i. This graph has O(n^2) edges and can be constructed in O(n^2) time. Clearly it contains no cycles.
Suppose the answer to the Subset Sum problem is YES: that is, there exists a submultiset Y of X such that the numbers in Y total exactly k. To be precise, let Y = {y_1, y_2, ..., y_m} consist of the m <= n indices of the selected elements of X, listed in increasing order (edges only go to higher-numbered vertices, so a path must visit them in this order). Then there is a corresponding path in the graph G with exactly the same weight -- namely the path that starts at v_{y_1}, takes the edge to v_{y_2} (which has weight x_{y_1}), then takes the edge to v_{y_3}, and so on, finally arriving at v_{y_m} and taking a final edge (which has weight x_{y_m}) to the terminal vertex v_{n+1}.
In the other direction, suppose that there is a simple path in G of total weight exactly k. Since the path is simple, each vertex appears at most once. Thus each edge in the path leaves a unique vertex. For each vertex v_i in the path except the last, add x_i to a set of chosen numbers: these numbers correspond to the edge weights in the path, so clearly they sum to exactly k, implying that the solution to the Subset Sum problem is also YES. (Notice that the position of the final vertex in the path doesn't matter, since we only care about the vertex that it leaves, and all edges leaving a vertex have the same weight.)
A YES answer to either problem implies a YES answer to the other problem, so a NO answer to either problem implies a NO answer to the other problem. Thus the answer to any Subset Sum problem instance can be found by first constructing the specified instance of your problem in polynomial time, and then using any algorithm for your problem to solve that instance -- so if an algorithm exists that can solve any instance of your problem in polynomial time, the NP-hard Subset Sum problem can also be solved in polynomial time.
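For concreteness, here is a small sketch of the construction; the name subset_sum_to_dag and the edge-list representation are mine:

    # Build the DAG for Subset Sum instance X: vertices 1..n+1, and for each i
    # an edge (v_i, v_j) of weight x_i for every j > i.
    def subset_sum_to_dag(X):
        n = len(X)
        edges = []
        for i in range(1, n + 1):
            for j in range(i + 1, n + 2):
                edges.append((i, j, X[i - 1]))  # every edge leaving v_i weighs x_i
        return edges

    # For X = [3, 5, 6] and k = 8, the subset {3, 5} corresponds to the
    # simple path v_1 -> v_2 -> v_4 of weight 3 + 5 = 8.
    print(subset_sum_to_dag([3, 5, 6]))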

Optimal substructure for least number of perfect squares

Question: I know how the recursion works but I can't seem to understand the 'optimal substructure' for this problem which necessitates the use of dynamic programming.
Problem: Find the least number of perfect squares that sum up to a given number.
Let's say we want to find the shortest path from U to V. If we have a node X in between, then the shortest path from U to V will be the shortest path from U to X plus the shortest path from X to V.
I am having difficulty understanding how the least squares problem follows the optimal substructure property.
To my understanding, the recurrence relation for the sum of perfect squares behaves similarly to the recurrence relation for shortest paths in the following way. Let
f(n) := minimum number of perfect squares which sum up to exactly n
then a suitable recurrence, together with the base case f(s) = 1 for every perfect square s, can be stated as
f(n) = min{ f(n-i) + f(i) : 0 < i < n }
which means that all partitions of the original argument into two summands have to be taken into account. Intuitively, the 'split point' for the shortest path problem is a node, whereas in the perfect squares problem it is the decision how to partition the argument into two summands (which are then examined further).
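A direct memoized implementation of this recurrence, as a sketch (the helper name f is mine; the base case f(s) = 1 for perfect squares s is made explicit):

    from functools import lru_cache
    from math import isqrt

    @lru_cache(maxsize=None)
    def f(n):
        if isqrt(n) ** 2 == n:               # base case: n itself is a perfect square
            return 1
        return min(f(n - i) + f(i) for i in range(1, n))

    print(f(12))   # 3, since 12 = 4 + 4 + 4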
You did not state the property correctly; the third paragraph should be rephrased this way:
Let's say we want to find the shortest path from U to V. First check whether U and V are linked directly; if not, compute, for each node X between U and V, the shortest path from U to X plus the shortest path from X to V. The smallest of these sums gives the shortest path from U to V (there may be several such paths).
This is applicable to your problem because you can determine the set of nodes X that lie between U and V. For U = 0 and V = n, this set is all the numbers from 1 to n-1, because you are adding positive numbers.
For the solution, use an array to cache the smallest path from 0 to i, for i going from 0 to n. For each new value, a linear search will yield the best solution, for an overall time complexity of O(n^2).
You can optimize the linear search by enumerating only the perfect squares less than or equal to n. This list is much shorter than the whole list of numbers; its length is actually sqrt(n), so the complexity of the overall search drops to O(n^(3/2)).
The cache can be just a pair of integers per entry: the length of the path and the value of an intermediary X that lies on one of the shortest paths. This gives a space complexity of O(n).
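A sketch of that cached array; best, via, and least_squares are my own names:

    from math import isqrt

    def least_squares(n):
        best = [0] * (n + 1)                 # best[i] = smallest "path" from 0 to i
        via = [0] * (n + 1)                  # via[i]  = one square used on that path
        squares = [s * s for s in range(1, isqrt(n) + 1)]
        for i in range(1, n + 1):
            best[i], via[i] = min(
                (best[i - sq] + 1, sq) for sq in squares if sq <= i
            )
        parts, i = [], n                     # walk the via[] entries back to 0
        while i > 0:
            parts.append(via[i])
            i -= via[i]
        return best[n], parts

    print(least_squares(12))   # (3, [4, 4, 4])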
The problem in question (find the least number of perfect squares that sum up to a given number) has been studied extensively for more than 17 centuries: Lagrange's four-square theorem, also known as Bachet's conjecture, was already known to Diophantus in the third century. It states that every natural number can be represented as the sum of four integer squares. Analytical solutions exist for determining whether any given integer is the sum of 1, 2, 3, or 4 perfect squares.
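One way such an analytic test can look in code; this sketch relies on Legendre's three-square theorem for the 3-versus-4 distinction, which the answer does not cite by name:

    from math import isqrt

    def least_squares_analytic(n):
        if isqrt(n) ** 2 == n:               # sum of 1 square
            return 1
        m = n
        while m % 4 == 0:                    # strip factors of 4
            m //= 4
        if m % 8 == 7:                       # Legendre: n = 4^a(8b+7) needs 4 squares
            return 4
        for a in range(1, isqrt(n) + 1):     # check for a sum of 2 squares
            b = n - a * a
            if isqrt(b) ** 2 == b:
                return 2
        return 3                             # otherwise 3 squares suffice

    print(least_squares_analytic(12))   # 3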

Proving inapproximability for minimum cycle covering

Consider the cycle covering problem: given a graph G, we look for a set C of cycles such that every vertex of V(G) is in at least one cycle of C and the number of cycles in C is minimum.
My task is to show that this problem does not admit an absolute approximation, i.e., there cannot be an algorithm H such that for all instances I of the problem, H(I) <= OPT(I) + k, where OPT(I) is the optimal value for I and k is a number greater than or equal to 1. The usual technique is to show that if such an algorithm existed, we could solve some NP-hard problem in polynomial time.
Does anyone know which problem could be used for that?
Suppose there is an algorithm H and a positive integer k such that for every graph G, H(G) <= OPT(G) + k holds, where OPT(G) denotes the minimum number of cycles necessary to cover all nodes of G, and the runtime of H is polynomially bounded in n, the number of nodes of G.
Given any graph G, create a graph G' consisting of k+1 disjoint isomorphic copies of G; note that the number of nodes in G' is (k+1)n, which is polynomially bounded in n. The following two cases can occur:
If G contains a Hamiltonian cycle, then OPT(G')=k+1 and H(G')<=OPT(G')+k=k+1+k=2k+1.
If G does not contain a Hamiltonian cycle, then every copy of G needs at least two cycles to be covered, so OPT(G') >= 2(k+1) = 2k+2 > 2k+1, hence H(G') >= OPT(G') > 2k+1.
In total, H can be used to decide, in runtime polynomially bounded in n, whether G contains a Hamiltonian cycle; however, as deciding whether G has a Hamiltonian cycle is an NP-complete problem, this is impossible unless P = NP holds.
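For concreteness, the decision procedure implied by the proof could look like the following sketch, treating H as a black box; all names here are mine:

    def disjoint_copies(edges, n, t):
        """Edge list of t disjoint copies of a graph given on nodes 0..n-1."""
        return [(u + i * n, v + i * n) for i in range(t) for (u, v) in edges]

    def has_hamiltonian_cycle(edges, n, H, k):
        g_prime = disjoint_copies(edges, n, k + 1)
        # yes-instances force H(G') <= 2k+1, no-instances force H(G') >= 2k+2
        return H(g_prime) <= 2 * k + 1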
Note: This approach is called 'gap creation', as instances are transformed in such a way that there is a gap in the objective value between
optimal solutions of yes-instances;
suboptimal solutions of yes-instances and feasible solutions of no-instances.

Updating Shortest path distances matrix if one edge weight is decreased

We are given a weighted graph G and its shortest-path distance matrix delta, where delta(i, j) denotes the weight of the shortest path from i to j (i and j being two vertices of the graph).
delta is initially given, containing the values of the shortest paths. Suddenly the weight of edge E is decreased from W to W'. How can delta(i, j) be updated in O(n^2)? (n = number of vertices of the graph)
The problem is NOT computing all-pairs shortest paths again, which has O(n^3) complexity at best; the problem is UPDATING delta, so that we won't need to recompute all-pairs shortest paths.
To clarify: all we have is a graph and its delta matrix, which contains just the values of the shortest paths. Now we want to update the delta matrix according to a change in the graph: a decreased edge weight. How can it be updated in O(n^2)?
If edge E from node a to node b has its weight decreased to W', then we can update the shortest path length from node i to node j in constant time. The new shortest path from i to j is either the same as the old one or it contains the edge from a to b. If it contains that edge, then its length is delta(i, a) + W' + delta(b, j).
From this, the O(n^2) algorithm to update the entire matrix is trivial, as is the variant dealing with undirected graphs.
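A minimal sketch of that update, assuming a directed edge (a, b) and a list-of-lists distance matrix; the function name is mine:

    def update_after_decrease(delta, a, b, w_new):
        """Edge (a, b) just had its weight decreased to w_new."""
        n = len(delta)
        for i in range(n):
            for j in range(n):
                # the new shortest i -> j path either avoids (a, b) or crosses it
                delta[i][j] = min(delta[i][j], delta[i][a] + w_new + delta[b][j])

For an undirected graph, additionally consider crossing the edge in the other direction, i.e. delta[i][b] + w_new + delta[a][j].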
http://dl.acm.org/citation.cfm?doid=1039488.1039492
The paper considers both increases and decreases; an increase would make the problem harder. On page 973, section 3, though, the authors explain how to handle the decrease-only case in n*n.
And no, dynamic all-pairs shortest paths can be done in less than n^3. Wikipedia is not up to date, I guess ;)
Read up on Dijkstra's algorithm. It's how you do these shortest-path problems, and runs in less than O(n^2) anyway.
EDIT There are some subtleties here. It sounds like you're provided with the shortest path from any i to any j in the graph, and it sounds like you need to update the whole matrix. Iterating over this matrix is n^2, because the matrix pairs every node with every other node, i.e. n*n = n^2 entries. Simply re-running Dijkstra's algorithm for every entry in the delta matrix will not change this performance class, since n^2 is greater than Dijkstra's O(|E| + |V| log |V|) performance. Am I reading this properly, or am I misremembering big-O?
EDIT EDIT It looks like I am misremembering big-O. Iterating over the matrix would be n^2, and running Dijkstra's for each entry would be additional overhead on top of that. I don't see how to do this in the general case without figuring out exactly which paths W' is included in... which seems to imply that each pair should be checked. So you either need to update each pair in constant time, or avoid checking significant parts of the matrix.
