Find a path in a complete graph with cost limit and max reward - algorithm

I'm looking for an algorithm to solve this problem. I have to implement it, so ideally I need something that isn't hopelessly exponential. :)
I have a complete graph with a cost on each edge and a reward on each vertex. I have only a start point; the end point doesn't matter, because the goal is to find a path that visits as many vertices as possible, so as to collect the maximum possible reward, subject to a maximum cost limit. (For this reason the end position doesn't matter.)
I think finding the optimal solution is NP-hard, but an approximate solution would also be appreciated. :D
Thanks
I'm trying to study how to solve the problem with branch & bound...
Update: complete problem description
I have a region in which there are several areas, each identified by an id and an x, y, z position. Each vertex corresponds to one of these areas. The maximum number of areas is 200.
From a start point S, I know the cost, specified in seconds and stored on the edge (so only integer values), to reach each vertex from each other vertex (a complete graph).
When I visit a vertex I get a reward (float values).
My objective is to find a path in the graph that maximizes the reward, subject to a cost constraint on the path: I only have limited time to complete it (for example 600 seconds).
The graph is represented as an adjacency matrix for the costs, plus the per-vertex rewards (but I can change the representation if that is useful).
I can visit a vertex more than once, but I collect its reward only the first time.

Since you're interested in branch and bound, let's formulate a linear program. Use Floyd–Warshall to adjust the costs minimally downward so that cost(uw) ≤ cost(uv) + cost(vw) for all vertices u, v, w.
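For the preprocessing step, here is a minimal Python sketch (assuming the costs live in an n-by-n matrix called cost, a name chosen purely for illustration): the standard Floyd–Warshall update replaces each cost with the corresponding shortest-path distance, which is exactly the minimal downward adjustment that restores the triangle inequality.

def metric_closure(cost):
    n = len(cost)
    d = [row[:] for row in cost]   # work on a copy; keep the original matrix
    for v in range(n):             # intermediate vertex
        for u in range(n):
            for w in range(n):
                if d[u][v] + d[v][w] < d[u][w]:
                    d[u][w] = d[u][v] + d[v][w]
    return d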
Let s be the starting vertex. We have 0-1 variables x(v) that indicate whether vertex v is part of the path and 0-1 variables y(uv) that indicate whether the arc uv is part of the path. We seek to maximize
sum over all vertices v of reward(v) x(v).
The constraints unfortunately are rather complicated. We first relate the x and y variables.
for all vertices v ≠ s, x(v) - sum over all vertices u of y(uv) = 0
Then we bound the cost.
sum over all arcs uv of cost(uv) y(uv) ≤ budget
We have (pre)flow constraints to ensure that the arcs chosen look like a path possibly accompanied by cycles (we'll handle the cycles shortly).
for all vertices v,
sum over all vertices u of y(uv) - sum over all vertices w of y(vw) ≥ -1 if v = s, and ≥ 0 if v ≠ s
To handle the cycles, we add cut covering constraints.
for all subsets of vertices T such that s is not in T,
for all vertices t in T,
sum over all vertices u not in T and v in T of y(uv) - x(t) ≥ 0
Because of the preflow constraints, any cycle in a solution is necessarily disconnected from the path structure, and these constraints are what rule such cycles out.
There are exponentially many cut covering constraints, so when solving the LP we have to generate them on demand. This means finding a minimum cut between s and each other vertex t (using the current y values as capacities) and checking that the capacity of the cut is at least x(t). If it is smaller, we have found a violated constraint; we add it and use the dual simplex method to find the new optimum (repeating as necessary).
I'm going to pass on describing the branching machinery – this should be taken care of by your LP solver anyway.
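If it helps to see the base model in code, here is a rough sketch using the PuLP modeling library in Python. The exponentially many cut covering constraints are deliberately omitted; they would be added lazily by a separation routine (minimum s-t cuts on the y values), as described above. The names rewards, cost, s and budget are illustrative, not from the question.

import pulp

def build_base_model(rewards, cost, s, budget):
    n = len(rewards)
    V = range(n)
    prob = pulp.LpProblem("orienteering", pulp.LpMaximize)
    x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in V}
    y = {(u, v): pulp.LpVariable(f"y_{u}_{v}", cat="Binary")
         for u in V for v in V if u != v}

    # objective: total reward of the visited vertices
    prob += pulp.lpSum(rewards[v] * x[v] for v in V)

    # x(v) equals the number of chosen arcs entering v, for v != s
    for v in V:
        if v != s:
            prob += x[v] - pulp.lpSum(y[(u, v)] for u in V if u != v) == 0

    # cost budget
    prob += pulp.lpSum(cost[u][v] * y[(u, v)] for (u, v) in y) <= budget

    # preflow constraints: in-degree minus out-degree >= -1 at s, >= 0 elsewhere
    for v in V:
        indeg = pulp.lpSum(y[(u, v)] for u in V if u != v)
        outdeg = pulp.lpSum(y[(v, w)] for w in V if w != v)
        prob += indeg - outdeg >= (-1 if v == s else 0)

    return prob, x, y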

Finding the optimal solution
Here is a recursive approach to solving your problem.
Let's begin with some definitions:
Let A = (A_i), 1 ≤ i ≤ N, be the areas.
Let w(i,j) = w(j,i) be the time cost for traveling from A_i to A_j and vice versa.
Let r_i be the reward for visiting area A_i.
Here is the recursive procedure that outputs the exact requested solution (pseudo-code):
// W[i][j] is the travel time between areas i and j, A[i] is the i-th area,
// N is the number of areas. Rewards are floats, so the accumulators are too.
List<Area> GetBestPath(int time_limit, Area S, float *rwd) {
    float best_reward = 0, possible_reward = 0;
    int best_fit = -1;
    List<Area> possible_path[N];
    if (time_limit < 0) {
        return [];                            // budget exceeded: dead end
    }
    bool collected_here = false;
    if (!S.visited) {                         // collect the reward only on the first visit
        *rwd += S.reward;
        S.visit();
        collected_here = true;
    }
    for (int i = 0; i < N; ++i) {
        if (S.index != i) {
            possible_reward = 0;              // reset for each candidate branch
            possible_path[i] = GetBestPath(time_limit - W[S.index][i], A[i], &possible_reward);
            if (possible_reward > best_reward) {
                best_reward = possible_reward;
                best_fit = i;
            }
        }
    }
    if (collected_here) {
        S.unvisit();                          // backtrack so sibling branches see a clean state
    }
    *rwd += best_reward;
    if (best_fit >= 0) {
        possible_path[best_fit].push_front(S);
        return possible_path[best_fit];
    }
    return [S];                               // no affordable move: the path ends here
}
For clarity, I assumed the A_i and the w(i,j) to be globally accessible.
Explanations
You start at S. The first thing you do is collect the reward and mark the node as visited. Then you have to check which way is best among S's N-1 neighbors (let's call them N_{S,i} for 1 ≤ i ≤ N-1).
This is exactly the same as solving the problem for N_{S,i} with a time limit of:
time_limit - w(S, N_{S,i})
And since you mark the visited nodes, when arriving at an area you first check whether it is marked. If so, you get no reward; otherwise you collect the reward and mark it as visited.
And so forth !
The ending condition is when time_limit (C) becomes negative. This tells us we reached the limit and cannot proceed to further moves : the recursion ends. The final path may contain useless journeys if all the rewards have been collected before the time limit C is reached. You'll have to "prune" the output list.
Complexity?
Oh, this solution is awful in terms of complexity!
Each call leads to N-1 further calls, until the time limit is reached. The longest possible call sequence comes from going back and forth along the cheapest edge; let w_min be the weight of that edge.
The recursion depth is then up to C/w_min, so the overall number of calls is bounded by roughly N^(C/w_min).
This is huge.
Another approach
Maintain a hash table of all the visited nodes.
On the other side, maintain a max-priority queue (e.g. using a max-heap) of the nodes that have not been collected yet; the top of the heap is the node with the highest reward. The priority value for each node A_i in the queue is the pair (r_i, E[w(i,j)]).
1. Pop the heap: Target <- heap.pop().
2. Compute the shortest path from the current position to Target using Dijkstra's algorithm.
3. If the cost of that path exceeds the remaining budget, the node is not reachable: add it to the unreachable-nodes list.
4. Otherwise, collect all the uncollected nodes you find along the path, remove each collected node from the heap, and make Target the new starting point.
5. In either case, go back to step 1, until the heap is empty.
Note: a hash table is best suited for keeping track of the collected nodes; this way, checking a node on a path computed by Dijkstra takes O(1).
Likewise, maintaining a hash table from each node to its position in the heap can be useful to speed up the "pruning" of the heap when collecting the nodes along a path.
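Here is a rough Python sketch of this heuristic, assuming a complete cost matrix W (travel times in seconds), a reward list r, a start vertex s and a time budget C; all of these names are illustrative. For simplicity the priority is the reward alone, and already-collected nodes are skipped lazily when popped instead of being pruned out of the heap.

import heapq

def dijkstra(W, src):
    n = len(W)
    dist = [float("inf")] * n
    prev = [None] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in range(n):
            if v != u and d + W[u][v] < dist[v]:
                dist[v] = d + W[u][v]
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

def greedy_collect(W, r, s, C):
    n = len(W)
    collected = {s}
    total = r[s]
    heap = [(-r[v], v) for v in range(n) if v != s]   # max-heap via negated rewards
    heapq.heapify(heap)
    pos, budget = s, C
    while heap:
        _, target = heapq.heappop(heap)
        if target in collected:
            continue                       # already picked up along an earlier path
        dist, prev = dijkstra(W, pos)
        if dist[target] > budget:
            continue                       # unreachable within the remaining time
        budget -= dist[target]
        path, v = [], target               # rebuild the path pos -> target
        while v is not None:
            path.append(v)
            v = prev[v]
        for v in path:                     # collect every new vertex on the way
            if v not in collected:
                collected.add(v)
                total += r[v]
        pos = target
    return total, collected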
A little analysis
This approach is somewhat better than the first one in terms of complexity, but it may not produce the optimal result; in fact it can perform quite poorly on some graph configurations. For example, suppose every node has reward r except one node T with reward r+1, the edge from every node N to T costs the whole budget C, and all the other edges are cheap. The heuristic goes straight for T, collects it, and misses every other node. In this case the best solution would have been to ignore T and collect everything else, for a reward of (N-1)·r instead of only r+1.

Related

Maximize sum of values of K courses if all prerequisites must be met

Background: The following problem occurred to me when I was trying to come up with a hiring challenge for software engineers in my company. I quickly realized that it was probably too difficult since I couldn't come up with a good solution myself :-) Still, I would be interested to know if somebody else can think of an efficient algorithm.
Consider a worker who is deciding which of a selection of continuing education courses to take. Each course will teach them a skill, which will boost their salary by a certain known amount. The worker wants the maximum salary boost, but there are some constraints to consider: Each course takes a month to complete, and the worker can only take one course at a time. They only have K months available for their education. Furthermore, some courses have prerequisites and can only been taken if the worker already completed all of the prerequisite courses. How can the worker find the curriculum that will give them the maximum raise?
More formally, consider a directed acyclic graph with N nodes, which have values v[0], ..., v[N-1] (all positive), and a list of M directed edges E = [(a[0],b[0]), ..., (a[M-1],b[M-1])]. For simplicity we can assume topological order (i.e. 0 <= a[i] < b[i] < N for i = 0, ..., M-1). What is the maximum sum of values of K nodes if we can only select a node if all of its ancestors in the DAG have been selected?
We can trivially solve this problem in O(M + K * N^K) by looping over all size-K subsets and checking if all prerequisites are met. A simple Python implementation would be as follows:
import itertools

def maxSumNodes(v, E, K):
    # For each node, compute all of its direct ancestors in the DAG
    N = len(v)
    ancestors = [[] for i in range(N)]
    for a, b in E:
        ancestors[b].append(a)
    maxSumValues = 0
    for r in itertools.combinations(range(N), K):
        sumValues = sum(v[i] for i in r)
        nodesSelected = set(r)
        if all(all(x in nodesSelected for x in ancestors[y]) for y in r):
            maxSumValues = max(maxSumValues, sumValues)
    return maxSumValues
However, this becomes prohibitively expensive if K is large (e.g. N = 1,000,000, K = 500,000). Is there a polynomial algorithm in N that works for any K? Alternatively, can it be proven that the problem is NP-hard?
I found this algorithm, which only considers k-sets whose prerequisites are satisfied:
class Node(val value: Int, val children: List<Node> = emptyList())

fun maximise(activeNodes: List<Node>, capacity: Int): Int {
    if (capacity == 0 || activeNodes.isEmpty()) return 0
    return activeNodes.maxOf { it.value + maximise(activeNodes - it + it.children, capacity - 1) }
}
val courses = listOf(Node(value = 1, children = listOf(Node(value = 20))), Node(value = 5))
val months = 2
maximise(courses, months)
(Building a DAG isn't an issue, so I'm just assuming my input is already in DAG form)
This algorithm will perform better than yours if there are lots of requirements. However, the worst case for this algorithm (no requirements) boils down to checking each possible k-set, meaning O(N^K).
The problem certainly looks NP-hard, but proving it is complicated. The closest to a proof I got is transforming it into a modified knapsack problem (which is NP-hard) (also mentioned by #user3386109 in the comments):
Start with a list of all paths in the DAG, with their combined values as value and their length as weight. Now start solving your knapsack problem, but whenever an item is picked,
remove all subsets of that path
modify all supersets of that path as follows: reduce their value by that path's value and their weight by that path's weight.
I think that makes this problem at least as hard as knapsack, but I can't prove it.
This problem is NP-Complete. You can show that the problem of deciding, for a given vertex-weighted DAG and integers k and M, whether there exists a valid schedule with k tasks of total value at least M, is NP-Complete. The proof uses gadgets and a strategy borrowed from some of the first hardness proofs for scheduling of Garey and Johnson.
In fact, I'll prove the stronger statement that the problem remains NP-hard even when all weights are restricted to 1 or 2 and the DAG has maximum in-degree at most 2 and longest-path length at most 2.
First, the problem is in NP: Given a witness (schedule) of k tasks, we can test in linear time that the schedule is valid and has a value at least M.
You can give a reduction from CLIQUE: Suppose we're given an instance of CLIQUE, which is an undirected simple graph G with vertices V = {v1, v2...vq} and edges E = {e1, e2,...er} along with an integer p, and we're asked whether G has a clique of size p.
We transform this into the following DAG-scheduling instance:
Our vertex set (tasks) is {V1, V2...Vq} U {E1, E2,...Er}
All tasks Vi have value 1 for 1 <= i <= q
All tasks Ej have value 2 for 1 <= j <= r
There is an arc (Vi, Ej) in our DAG whenever vi is an endpoint of ej in G.
k = p(p+1)/2
M = p^2
The precedence constraints/DAG edges are precisely the requirement that we cannot perform an edge task Ej until we've completed both vertex-tasks corresponding to its endpoints. We have to show that there is a valid schedule of k tasks with value at least M in our DAG if and only if G has a clique of size p.
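As a concrete illustration, here is a tiny Python sketch of this construction; the input is an undirected graph given as vertex and edge lists plus the clique size p, and all names are illustrative.

def clique_to_dag_scheduling(vertices, edges, p):
    values = {}
    dag_edges = []
    for v in vertices:
        values[("V", v)] = 1                          # vertex tasks have value 1
    for (a, b) in edges:
        values[("E", (a, b))] = 2                     # edge tasks have value 2
        dag_edges.append((("V", a), ("E", (a, b))))   # both endpoints are
        dag_edges.append((("V", b), ("E", (a, b))))   # prerequisites of the edge task
    k = p * (p + 1) // 2
    M = p * p
    return values, dag_edges, k, M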
Suppose we have a valid schedule of k tasks with value at least M. Since M is p^2, and k = p(p+1)/2, one way of reaching this value is by completing at least p(p-1)/2 edge tasks, which have value 2, in addition to any p other tasks. In fact, we must complete at least this many edge tasks to reach value M:
Letting E_completed be the set of edge tasks in our schedule and V_completed be the set of vertex tasks in our schedule, we get that
2|E_completed| + |V_completed| >= p^2 and |E_completed| + |V_completed| = p(p+1)/2. Substituting variables and rearranging, we get that |V_completed| <= p, meaning we've completed at most p vertex tasks and thus at least p(p-1)/2 edge tasks.
Since G is a simple graph, for p(p-1)/2 edges to have both endpoints covered by vertices, we must use at least p vertices. So it must be that exactly p vertex tasks were completed, and thus that exactly p(p-1)/2 edge tasks were completed, which, by the precedence constraints, imply that those p corresponding vertices form a clique in G.
For proving the other direction, it's immediate that given a clique in G of size p, we can choose the p corresponding vertex tasks and p(p-1)/2 corresponding edge tasks to get a schedule of k tasks of value M. This concludes the proof.
Some closing notes on this fascinating problem and the winding journey of trying to solve it efficiently:
CLIQUE is, in a sense, among the hardest of the NP-Complete problems, so a reduction from CLIQUE means that this problem inherits its hardness: the DAG scheduling problem is hard-to-approximate and fixed-parameter intractable, making it harder than Knapsack in that sense.
Because the DAG scheduling only depends on reachability constraints, it seemed at first that an efficient solution could be possible. You can, in practice, take the transitive reduction of your graph, and then use a topological ordering where vertices are also ordered by their depth (longest path ending at a vertex).
In practice, many interesting instances of this problem could be efficiently solvable (which is one reason I intuitively suspected the problem was not NP-Hard). If the DAG is deep and some vertices have many ancestors, rather than at most 2, we can do a depth-first search backwards through the topological order. A guess that one vertex is in our schedule forces all of its ancestors to also be in the schedule. This cuts the graph size considerably (possibly into many disconnected, easily solved components) and reduces k by the number of ancestors as well.
Overall, a very interesting problem-- I was unable to find the exact problem elsewhere, or even much information about the relevant structural properties of DAGs. However, CLIQUE can be used for many scheduling problems: the terminology and use of vertex and edge tasks is adapted from Garey and Johnson's 1976 hardness proof, (which I highly recommend reading) of a superficially different problem: scheduling an entire (unweighted) DAG, but where tasks have variable deadlines and unit processing time, and we only want to know whether we can make at most k tasks late for their deadlines.

Shortest path from one source which goes through N edges

In my economics research I am currently dealing with a specific shortest path problem:
Given a directed deterministic dynamic graph with weights on the edges, I need to find the shortest path from one source S, which goes through N edges. The graph can have cycles, the edge weights could be negative, and the path is allowed to go through a vertex or edge more than once.
Is there an efficient algorithm for this problem?
One possibility would be:
First find the lowest edge-weight in the graph.
Then build a priority queue of partial paths from the start (initially just the empty path at the start vertex), where the priority of a path counts each of its yet-to-be-added edges as having that lowest weight.
Main loop:
Remove path with lowest weight from the queue.
If path has N edges you are done
Otherwise add all possible one-edge extensions of that path to priority queue
However, that simple algorithm has a flaw: the same vertex can be expanded several times at the same position i in the path (reaching a vertex as the 2nd and the 4th edge is fine, but reaching it as the 4th edge along two different paths is redundant), which is inefficient.
The algorithm can be improved by skipping such repeats in the third step above: the priority queue guarantees that the first partial path reaching a vertex at a given depth had the lowest weight-sum to that vertex, and the rest of the path does not depend on how the vertex was reached (since edges and vertices may be repeated).
The "exactly N edges" constraint makes this problem much easier to solve than if that constraint didn't exist. Essentially you can solve N = 0 (just the start node), use that to solve N = 1 (all the neighbors of the start node), then N = 2 (neighbors of the solution to N = 1, taking the lowest cost path for nodes that are are connected to multiple nodes), etc.
In pseudocode (using {field: val} to mean "a record with a field named field with value val"):
# returns a map from node to {cost, path}, where each key is a node reachable
# from start_node in exactly n steps, and the associated value holds the total
# cost of the cheapest such walk and one walk achieving it
cheapest_path(n, start_node):
    i = 0
    horizon = new map()
    horizon[start_node] = {cost: 0, path: []}
    while i < n:
        next_horizon = new map()
        for node, entry in key_value_pairs(horizon):
            for neighbor in neighbors(node):
                this_neighbor_cost = entry.cost + edge_weight(node, neighbor, i)
                this_neighbor_path = entry.path + [neighbor]
                if next_horizon[neighbor] does not exist or this_neighbor_cost < next_horizon[neighbor].cost:
                    next_horizon[neighbor] = {cost: this_neighbor_cost, path: this_neighbor_path}
        i = i + 1
        horizon = next_horizon
    return horizon
We take account of dynamic weights using edge_weight(node, neighbor, i), meaning "the cost of going from node to neighbor at time step i".
This is a degenerate version of a single-source shortest-path algorithm like Dijkstra's Algorithm, but it's much simpler because we know we must walk exactly N steps so we don't need to worry about getting stuck in negative-weight cycles, or longer paths with cheaper weights, or anything like that.

Find shortest path from s to every v with a limit on length

Let G(V,E) be a directed graph with weights w: E -> R. Assume |V| is divisible by 10. Let s in V. Describe an algorithm that finds a shortest path from s to every v containing exactly |V|/10 edges.
At first I thought about using dynamic programming, but I ended up with a complexity of O(|V|^3), which is apparently not optimal for this exercise.
We can't use Bellman-Ford (as far as I understand) since there might be negative cycles.
What could be an optimal algorithm for this problem?
Thanks
EDIT
I forgot to mention a crucial piece of information: the path need not be simple, i.e. we may repeat edges (and vertices) in the path.
You can perform a depth-limited search with a limit of |V|/10 on your graph. It will help you find the path with the least cost.
limit = v_size / 10
best[v_size] initialized to MAX_INT

depth_limited(node, length, cost)
    if length == limit
        if cost < best[node]
            best[node] = cost
        return
    for each u connected to node
        depth_limited(u, length + 1, cost + w[node][u])

depth_limited(start_node, 0, 0)
Bellman-Ford's algorithm should be applicable here with a slight modification.
After iteration k, the distance u_j(k) computed for each node u_j is the shortest distance from the source s using exactly k edges.
Initialize u_j(0) = infinity for all u_j, and 0 for s. The recurrence relation is then
u_j(k) = min { u_p(k-1) + w_{pj} | there is an edge from u_p to u_j }
Note that in this case u_j(k) may be greater than u_j(k-1).
The complexity of the above algorithm is O(|E| · |V|/10) = O(|E| · |V|).
Also, negative cycles don't matter in this case, because we stop after exactly |V|/10 iterations; whether the cost could be decreased further by adding more edges is irrelevant.
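A brief Python sketch of that modification, under the assumption that the graph is given as a list of (u, v, w) arcs (illustrative names): dist holds the cheapest walk from s to every vertex using exactly the current number of edges, and each relaxation round advances it by one edge.

def exact_k_edges(n, edges, s, steps):
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(steps):                  # e.g. steps = n // 10
        nxt = [INF] * n
        for (u, v, w) in edges:
            if dist[u] + w < nxt[v]:
                nxt[v] = dist[u] + w
        dist = nxt
    return dist   # dist[v] = cheapest s -> v walk with exactly `steps` edges (INF if none)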

Algorithm for finding the path that minimizes the maximum weight between two nodes

I would like to travel by car from city X to city Y. My car has a small tank, and gas stations exist only at intersections of the roads (the intersections are nodes and the roads are edges). Therefore, I would like to take a path such that the maximum distance that I drive between two gas stations is minimized. What efficient algorithm can I use to find that path? Brute force is one bad solution. I am wondering if there exists a more efficient algorithm.
Here is a simple solution:
Sort the edges by their weights.
Start adding them one by one (from the lightest to the heaviest) until X and Y become connected.
To check if they are connected, you can use a union-find data structure.
The time complexity is O(E log E).
A proof of correctness:
The correct answer is not larger than the one returned by this solution. This is because the solution is constructive: once X and Y are in the same component, we can explicitly write down a path between them, and it cannot contain heavier edges because those haven't been added yet.
The correct answer is not smaller than the one returned by this solution. Suppose there were a path between X and Y consisting only of edges strictly lighter than the returned answer. That is impossible: all lighter edges were processed earlier (we iterate over them in sorted order), and at that point X and Y were still in different components, so no such path exists.
1) and 2) imply the correctness of this algorithm.
This solution works for undirected graphs.
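A short Python sketch of this union-find approach for the undirected case, assuming the graph comes as a list of (w, u, v) triples (illustrative names); it returns the smallest possible value of the heaviest edge on an X-Y path.

def minimax_edge(n, edges, X, Y):
    if X == Y:
        return 0                            # no edges needed at all
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for (w, u, v) in sorted(edges):         # lightest edge first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
        if find(X) == find(Y):
            return w                        # X and Y have just become connected
    return None                             # X and Y are not connected at all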
Here is an algorithm which solves the problem for the directed case (it works for undirected graphs, too):
Let's sort the edges by their weights.
Let's binary search over the weight of the heaviest edge in the path (it is determined by the index of the edge in the sorted list of all edges).
For a fixed answer candidate i, we can do the following:
Add all edges with indices up to i in the sorted list(that is, all edges which are not heavier than the current one).
Run DFS or BFS to check that there is a path from X to Y.
Adjust left and right borders in the binary search depending on the existence of such path.
The time complexity is O((E + V) * log E) (we run DFS/BFS log E times, and each run takes O(E + V) time).
Here is a pseudo code:
if (X == Y)
    return 0 // We don't need any edges.
if (Y is not reachable from X using all edges)
    return -1 // No solution.
edges = a list of edges sorted by their weight in increasing order
low = -1                // definitely too small (no edges)
high = edges.length - 1 // definitely big enough (all edges)
while (high - low > 1)
    mid = low + (high - low) / 2
    g = empty graph
    for i = 0...mid
        g.add(edges[i])
    if (g.hasPath(X, Y)) // Checks that there is a path using DFS or BFS
        high = mid
    else
        low = mid
return edges[high]

Shortest Path Algorithm with Fuel Constraint and Variable Refueling

Suppose you have an undirected weighted graph. You want to find the shortest path from the source to the target node while starting with some initial "fuel". The weight of each edge is equal to the amount of "fuel" that you lose going across the edge. At each node, you can have a predetermined amount of fuel added to your fuel count - this value can be 0. A node can be visited more than once, but the fuel will only be added the first time you arrive at the node. All nodes can have different amounts of fuel to provide.
This problem could be related to a train travelling from town A to town B. Even though the two are directly connected by a simple track, there is a shortage of coal, so the train does not have enough fuel to make the trip. Instead, it must make the much shorter trip from town A to town C which is known to have enough fuel to cover the trip back to A and then onward to B. So, the shortest path would be the distance from A to C plus the distance from C to A plus the distance from A to B. We assume that fuel cost and distance is equivalent.
I have seen an example where the nodes always fill the "tank" up to its maximum capacity, but I haven't seen an algorithm that handles different amounts of refueling at different nodes. What is an efficient algorithm to tackle this problem?
Unfortunately this problem is NP-hard. Given an instance of traveling salesman path from s to t with decision threshold d (Is there an st-path visiting all vertices of length at most d?), make an instance of this problem as follows. Add a new destination vertex connected to t by a very long edge. Give starting fuel d. Set the length of the new edge and the fuel at each vertex other than the destination so that (1) the total fuel at all vertices is equal to the length of the new edge (2) it is not possible to use the new edge without collecting all of the fuel. It is possible to reach the destination if and only if there is a short traveling salesman path.
Accordingly, algorithms for this problem will resemble those for TSP. Preprocess by constructing a complete graph on the source, the target, and the vertices with nonzero fuel; the length of each edge is the shortest-path distance between its endpoints in the original graph.
If there are sufficiently few special vertices, then exponential-time (O(2^n poly(n))) dynamic programming is possible. For each pair consisting of a subset of vertices (in order of nondecreasing size) and a vertex in that subset, determine the cheapest way to visit all of the subset and end at the specified vertex. Do this efficiently by using the precomputed results for the subset minus the vertex and each possible last waypoint. There's an optimization that prunes the subsolutions that are worse than a known solution, which may help if it's not necessary to use very many waypoints.
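For what it's worth, here is a compact Python sketch of that exponential DP (Held-Karp style), assuming the preprocessing already produced dist, a matrix of pairwise shortest distances between the special vertices (source at index 0, target at the last index, plus every vertex with nonzero fuel), and fuel[v] for each of them. All names are illustrative, and fuel that would be picked up while merely passing through other special vertices is ignored for simplicity.

def cheapest_feasible_route(dist, fuel, start_fuel):
    n = len(dist)
    src, dst = 0, n - 1
    INF = float("inf")
    # best[(mask, last)] = minimum travel cost over feasible visit orders
    best = {(1 << src, src): 0}    # the source's own deposit counts via the mask
    answer = INF
    for mask in range(1 << n):     # masks in increasing order
        for last in range(n):
            key = (mask, last)
            if key not in best:
                continue
            cost = best[key]
            gathered = start_fuel + sum(fuel[v] for v in range(n) if mask >> v & 1)
            remaining = gathered - cost        # fuel in hand at `last`
            if last == dst:
                answer = min(answer, cost)
                continue
            for nxt in range(n):
                if mask >> nxt & 1 or dist[last][nxt] > remaining:
                    continue                   # already visited, or not enough fuel
                nkey = (mask | (1 << nxt), nxt)
                ncost = cost + dist[last][nxt]
                if ncost < best.get(nkey, INF):
                    best[nkey] = ncost
    return answer if answer < INF else None

Minimizing cost for a fixed set of visited vertices also maximizes the remaining fuel, so keeping only the cheapest feasible cost per (subset, last vertex) state is safe.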
Otherwise, integer programming may be the way to go. Here's one formulation, quite probably improvable. Let x(i, e) be a variable that is 1 if directed edge e is taken as the ith step (counting from the zeroth) and 0 otherwise. Let f(v) be the fuel available at vertex v. Let y(i) be a variable that is the fuel in hand after i steps. Assume that the total number of steps is T.
minimize sum_i sum_{edges e} cost(e) x(i, e)
subject to
for each i, for each vertex v,
sum_{edges e with head v} x(i, e) - sum_{edges e with tail v} x(i + 1, e) =
-1 if i = 0 and v is the source
1 if i + 1 = T and v is the target
0 otherwise
for each vertex v, sum_i sum_{edges e with head v} x(i, e) <= 1
for each vertex v, sum_i sum_{edges e with tail v} x(i, e) <= 1
y(0) <= initial fuel
for each i,
y(i) >= sum_{edges e} cost(e) x(i, e)
for each i, for each vertex v,
y(i + 1) <= y(i) + sum_{edges e} (-cost(e) + f(head of e)) x(i, e)
for each i, y(i) >= 0
for each i, for each edge e, x(i, e) in {0, 1}
There is no efficient algorithm for this problem. Take an existing graph G of size n, give each edge a weight of 1 and each node a fuel deposit of 5, and add a new target node connected to every node of G with an edge of weight 4 * (n - 1). Now the existence of a path from the source to the new target node in this graph is equivalent to the existence of a Hamiltonian path in G, which is a known NP-complete problem. (See http://en.wikipedia.org/wiki/Hamiltonian_path for details.)
That said, you can do better than a naive recursive solution for most graphs. First do a breadth-first search from the target node so that every node's distance to the target is known. Now you can borrow the main idea of A* search: explore paths from the source using a priority queue, always extending the path whose current distance plus the remaining distance to the target is minimal. To reduce work, you probably also want to discard any path that returns to a previously visited node with no more fuel than it had before. (This avoids silly paths that loop back and forth as fuel runs out.)
Treating the fuel consumption as a positive weight, fuel pickups as negative weights, and the initial fuel as one more negative-weighted edge, you can use Bellman-Ford to solve it as a single-source shortest path problem.
As per this answer elsewhere, undirected graphs can be handled by replacing each edge with two directed edges, one in each direction. The only constraint I'm not sure about is the part where you can only refuel once at each node; that may be handled naturally by the algorithm, but I'm not sure.
