Multiple Sources - Multiple Destinations - No Collisions - algorithm

Let's say I have a set of destinations and another corresponding set of origins, and I need to link each destination with one origin. Vehicles start from each origin towards their respective destinations. The speed of every vehicle is provided.
In the network, no two vehicles moving in opposite directions are allowed to be on a particular road at the same instant of time; in brief, there should not be any collisions on a road. If such a situation arises, either of the two vehicles that would collide can wait until the other vehicle has passed, or take some other path to reach its destination.
The graph can be thought of as a road network, where each edge represents a road and each vertex represents an intersection of roads.
The aim is to calculate the minimum time required for each vehicle to reach its destination, and the path each vehicle takes to get there, while satisfying all the above constraints.
Ideas on a way to tackle that?

This is NP-hard.
The problem of deciding whether all cars can complete their trips in at most some given number k of time units is NP-hard, even under any combination of the following simultaneous restrictions: all cars travel at unit speed, every edge has length 1, k = 3. A problem being NP-hard means there's almost certainly no polynomial-time algorithm that solves every instance. To show this I'll give a reduction from the NP-hard problem 3SAT: In this problem, we are given a Boolean expression in the form of a conjunction (AND) of n clauses, each of which is a disjunction (OR) of 3 literals, each of which is either a variable or its negation (NOT). There are m variables overall, each of which we can assign to either TRUE or FALSE; our task is to determine whether the overall expression is satisfiable -- that is, whether there exists any assignment of TRUE or FALSE values to the m variables that causes the overall expression to be TRUE.
Constructing an instance of your problem from an arbitrary 3SAT instance
Suppose we have an instance of 3SAT with n clauses and m variables. We can construct an instance of your problem in which each variable becomes an edge, with the direction of traffic (left-to-right or right-to-left) along that edge corresponding to the value (TRUE or FALSE) of the variable. Each clause becomes a gadget that connects to both ends of 3 of these variable-edges. Intuitively, each clause-gadget gives a vehicle starting at its start point (think of this as being on the left) one of 3 options to reach its corresponding end point (think of this as being on the right). Specifically:
First, delete any clause that contains both a variable and its negation as literals. This serves to remove length-2 trips from the graph without affecting satisfiability of the original expression, since such a clause is satisfiable by any assignment.
For each variable x_i, create two vertices u_i and v_i, and the edge (u_i, v_i). All edges in this construction have weight (distance?) 1.
For all 1 <= j <= n, build a gadget corresponding to the jth clause as follows: Create a vertex s_j and a vertex t_j. Let the literals in the jth clause be a, b and c. Let x_i be the variable in literal a. If a is a positive literal, create the edges (s_j, u_i) and (v_i, t_j), otherwise (i.e., if it equals "NOT x_i") create the edges (s_j, v_i) and (u_i, t_j). Do likewise for literals b and c.
Finally, add (s_j, t_j) as a (source, destination) pair for each 1 <= j <= n. Give each such car unit speed.
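If it helps to see the construction concretely, here is a small sketch in Python. This is my own illustration, not part of the original argument; the vertex labels such as ('u', i) and ('s', j) are just a naming convention.

    # Build the road-network instance from a 3SAT instance.  A clause is a
    # tuple of three nonzero integers: +i stands for x_i, -i for NOT x_i.
    def build_instance(clauses):
        """Return (edges, trips): unit-length edges and (source, destination)
        pairs, one unit-speed car per clause.  Vertices are labelled
        ('u', i) / ('v', i) for variables and ('s', j) / ('t', j) for clauses."""
        # Step 1: drop clauses containing both a variable and its negation.
        clauses = [c for c in clauses if not any(-lit in c for lit in c)]

        edges, trips = set(), []
        for j, clause in enumerate(clauses):
            s, t = ('s', j), ('t', j)
            for lit in clause:
                i = abs(lit)
                u, v = ('u', i), ('v', i)
                edges.add((u, v))                      # the variable-edge
                if lit > 0:                            # positive literal
                    edges.add((s, u)); edges.add((v, t))
                else:                                  # negated literal
                    edges.add((s, v)); edges.add((u, t))
            trips.append((s, t))                       # one car per clause
        return sorted(edges), trips

    # Example: (x1 OR x2 OR NOT x3) AND (NOT x1 OR x3 OR x4)
    edges, trips = build_instance([(1, 2, -3), (-1, 3, 4)])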
I now claim that the original 3SAT instance is satisfiable if and only if the just-constructed instance of your problem has a solution with duration at most 3.
YES to 3SAT instance => YES to instance of your problem
First I'll show that if the 3SAT instance is satisfiable, then there exists a solution to the just-constructed instance of your problem with duration 3. In this case we can assume that a satisfying assignment Y exists, so for every 1 <= i <= m let y_i be the assignment to variable x_i in some such satisfying assignment. Now in the just-constructed instance, orient every edge incident on some s_j away from s_j, every edge incident on some t_j toward t_j, and each variable-edge as follows: If y_i = TRUE, then orient the edge (u_i, v_i) from u_i to v_i, while OTOH if y_i = FALSE, orient the edge from v_i to u_i. Since by assumption Y is a satisfying assignment, we know that every clause contains at least 1 literal that evaluates to TRUE: that is, in each clause there is at least 1 literal z containing a variable x_i such that either z is positive and x_i = TRUE, or z is negative and x_i = FALSE. This implies that, for every clause j, there is at least 1 path from s_j to t_j that agrees with the orientation of edges established above. Clearly, if cars only ever travel across edges in the direction given by the orientation above, there can never be two cars crossing an edge in opposite directions, so such car trips do not interfere with each other. Since these paths from s_j to t_j all have length 3 and no trips interfere with each other, all trips can be simultaneously completed in 3 time steps.
YES to instance of your problem => YES to 3SAT instance
Now I'll show that if the solution to the just-constructed instance of your problem has duration at most 3, then there exists a satisfying assignment to the original 3SAT instance. Assume that there is such a solution to the just-constructed instance: then clearly, every trip must be completed in at most 3 time units. For a car to get from s_j to t_j, it must use at least 1 of the 3 edges incident on s_j, and at least 1 of the 3 edges incident on t_j, so it must take at least 2 time units; furthermore, because we deleted any clauses containing both a variable and its negation, no vertex is adjacent to both an s_j and t_j for any j, so at least one more edge is required, meaning 3 time units is the shortest path we could hope for (since every edge takes 1 time unit). So every trip in the solution must take exactly 3 time units, along a 3-edge path that experiences no hold-ups due to cars coming the other way. Notice that the middle leg on such a path must be a single variable-edge, since the only other ways of getting from some u_i to v_i or vice versa involve "doubling back" via at least 2 more edges. In particular, for the trip starting at s_j, it must be one of the 3 distinct variable-edges corresponding to the 3 literals in clause j. Specifically, let the literals in the jth clause be a, b and c. Let x_i be the variable in literal a. If a is positive, then "from u_i to v_i" is one of the 3 permitted legs for the trip starting at s_j, otherwise (i.e., if a is negative) "from v_i to u_i" is. Likewise for the remaining literals b and c. So, thus far, we have established that:
For each 1 <= j <= n, a car can travel from s_j to t_j in at most 3 time units using one of the 3 middle legs corresponding to the literals in clause j.
We build a solution to the 3SAT instance as follows: For each variable x_i, if the edge (u_i, v_i) is crossed by one or more cars from u_i to v_i, assign TRUE to y_i; if it is crossed by one or more cars from v_i to u_i, assign FALSE to y_i; otherwise (if the edge is not crossed at all), arbitrarily assign either value to y_i. We need to show two things: that no variable is assigned both TRUE and FALSE, and that the assignment causes the expression to take the value TRUE.
First, the only condition under which a variable x_i could be assigned both TRUE and FALSE by the above rule is if at least one car traverses the edge (u_i, v_i) in each direction. Suppose towards contradiction that this is true: some variable-edge (u_i, v_i) is crossed in opposite directions by 2 different car trips in the solution. Then clearly at least one of the two cars must pause for at least 1 time step to let the other one through this edge. But then the solution would need at least 4 time steps, contradicting our assumption of a solution of duration at most 3 time steps, thus it must be that if any cars cross edge (u_i, v_i), then they all do so in the same direction, and thus each variable is assigned at most one of TRUE or FALSE.
Second, for each 1 <= j <= n, we can reinterpret the jth clause of the 3SAT instance as "A car can travel from s_j to t_j in at most 3 time units using one of the middle legs corresponding to the literals in clause j", where "corresponding" is used in the same sense as earlier. Observe that under this interpretation, the 3SAT instance is (a) equivalent to the statement in the bullet point above, which we have already established to be TRUE, and (b) still formally equivalent to the original 3SAT problem (since all we have done is given an interpretation to its variables and clauses).
It follows that the variable assignment for the 3SAT problem that we just built from the solution to the instance of your problem is free of contradictions and produces the value TRUE: i.e., the 3SAT formula is satisfiable.
Wrapping up
We have now established that a YES answer to the question "Does there exist a satisfying assignment for this 3SAT expression?" implies a YES answer to the question "Does there exist a way of getting all cars from their starting points to their destinations in 3 time steps or less?", and also that a YES answer to the latter question implies a YES answer to the former. Thus a NO answer to either question also implies a NO answer to the other: that is, the questions are equivalent. We constructed the instance of your problem in polynomial time from the given 3SAT instance, so if there was some algorithm that could solve your problem in polynomial time, it could also be used to solve any 3SAT instance in polynomial time -- by first constructing such an instance of your problem, calling this algorithm to solve that instance as a subroutine, and then returning the answer. Thus your problem is at least as hard as 3SAT, namely NP-hard.

Related

Algorithm for detecting the minimum count of edges to break all loops

Is there an algorithm that takes a directed graph as input, and as output gives the minimum number of edges to "break" in order to prevent loops?
As an example, the loops in the above graph are:
A, B, C, D
A, B, E
E, G, H, F
And the minimum number of edges to break all of them:
A - B, breaks 2 loops
E - G, breaks 1 loop
It gets more complicated when the loops are nested inside each other and share edges.
My current approach is to find all the loops, group them by their most common edge, sort in descending order, and break edges in that order, skipping loops already broken in previous iterations.
I have tried a few methods and they all vary by the count of edges they break - I'm looking for the minimum theoretically possible.
Is there an established algorithm that does this?
This is the NP-hard Minimum Feedback Arc Set problem. Wikipedia doesn't point at an exact algorithm for small- to medium-size graphs, so let me suggest one.
Using an integer programming solver library such as OR-Tools, we formulate an integer program with a 0-1 variable for each arc, where 0 means retain the arc and 1 means delete the arc. The objective is to minimize the sum of these variables.
Now, we need to specify constraints, but in general there can be an exponential number of cycles, which would quickly overwhelm the solver as the graph grows. Instead, do the following:
Solve the integer program (initially, with no constraints).
Use breadth-first search to find a shortest cycle in the retained edges, if there is one.
If there is a cycle, add a constraint that the sum of the corresponding variables must be greater than or equal to one, then go back to Step 1. Otherwise, the current solution is optimal.
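Here is a minimal sketch of that loop using OR-Tools CP-SAT. The function and variable names are my own, and a production version would check solver statuses and use a faster cycle search; this is just to show the shape of the cutting-plane approach.

    from collections import deque
    from ortools.sat.python import cp_model

    def shortest_cycle(nodes, arcs):
        """BFS from every node over the given arcs; return the arc list of a
        shortest directed cycle, or None if the graph is acyclic."""
        out = {u: [] for u in nodes}
        for u, v in arcs:
            out[u].append(v)
        best = None
        for start in nodes:
            parent, q = {start: None}, deque([start])
            while q:
                u = q.popleft()
                for v in out[u]:
                    if v == start:                     # closed a cycle back to start
                        cycle, w = [(u, start)], u
                        while parent[w] is not None:
                            cycle.append((parent[w], w))
                            w = parent[w]
                        if best is None or len(cycle) < len(best):
                            best = cycle
                        q.clear()                      # done with this start vertex
                        break
                    if v not in parent:
                        parent[v] = u
                        q.append(v)
        return best

    def min_feedback_arc_set(nodes, arcs):
        model = cp_model.CpModel()
        delete = {a: model.NewBoolVar(f"delete_{a}") for a in arcs}
        model.Minimize(sum(delete.values()))           # delete as few arcs as possible
        solver = cp_model.CpSolver()
        while True:
            solver.Solve(model)                        # Step 1: solve the current program
            kept = [a for a in arcs if solver.Value(delete[a]) == 0]
            cycle = shortest_cycle(nodes, kept)        # Step 2: look for a surviving cycle
            if cycle is None:                          # no cycle left: current solution is optimal
                return [a for a in arcs if solver.Value(delete[a]) == 1]
            model.Add(sum(delete[a] for a in cycle) >= 1)   # Step 3: cut this cycle, repeat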

Finding a "positive cycle"

Suppose E is an n x n matrix where E[i,j] represents the exchange rate from currency i to currency j (how much of currency j can be obtained with 1 unit of currency i). (Note that E[i,j]*E[j,i] is not necessarily 1.)
Come up with an algorithm to find a positive cycle if one exists, where a positive cycle is defined by: if you start with 1 of currency i, you can keep exchanging currency such that eventually you can come back and have more than 1 of currency i.
I've been thinking about this problem for a long time, but I can't seem to get it. The only thing I can come up with is to represent everything as a directed graph with matrix E as an adjacency matrix, where log(E[i,j]) is the weight between vertices i and j. And then you would look for a cycle with a negative path or something. Does that even make sense? Is there a more efficient/easier way to think of this problem?
First, take logs of exchange rates (this is not strictly necessary, but it means we can talk about "adding lengths" as usual). Then you can apply a slight modification of the Floyd-Warshall algorithm to find the length of a possibly non-simple path (i.e. a path that may loop back on itself several times, and in different places) between every pair of vertices that is at least as long as the longest simple path between them. The only change needed is to flip the sign of the comparison, so that we always look for the longest path (more details below). Finally you can look through all O(n^2) pairs of vertices u and v, taking the sum of the lengths of the 2 paths in each direction (from u to v, and from v to u). If any of these are > 0 then you have found a (possibly non-simple) cycle having overall exchange rate > 1. Overall the FW part of the algorithm dominates, making this O(n^3)-time.
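A rough sketch of the max-version of Floyd-Warshall plus the pair check, assuming E is given as a list of lists of positive rates; the small tolerance eps is my own addition to guard against floating-point noise.

    import math

    def has_positive_cycle(E):
        n = len(E)
        NEG = float("-inf")
        # dist[i][j] = weight (sum of log-rates) of the best path found so far
        dist = [[math.log(E[i][j]) if i != j else NEG for j in range(n)]
                for i in range(n)]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] > dist[i][j]:   # flipped comparison: keep the longest path
                        dist[i][j] = dist[i][k] + dist[k][j]
        eps = 1e-12                                            # tolerance for rounding noise
        return any(dist[i][j] + dist[j][i] > eps               # does some round trip gain weight?
                   for i in range(n) for j in range(n) if i != j)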
In general, the problem with trying to use an algorithm like FW to find maximum-weight paths is that it might join together 2 subpaths that share one or more vertices, and we usually don't want this. (This can't ever happen when looking for minimum-length paths in a graph with no negative cycles, since such a path would necessarily contain a positive-weight cycle that could be removed, so it would never be chosen as optimal.) This would be a problem if we were looking for the maximum-weight simple cycle; in that case, to get around this we would need to consider a separate subproblem for every subset of vertices, which pushes the time and space complexity up to O(2^n). Fortunately, however, we are only concerned with finding some positive-weight cycle, and it's reasonably easy to see that if the path found by FW happens to use some vertex more than once, then it must contain a nonnegative-weight cycle -- which can either be removed (if it has weight 0), or (if it has weight > 0) is itself a "right answer"!
If you care about finding a simple cycle, this is easy to do in a final step that is linear in the length of the path reported by FW (which, by the way, may be O(2^|V|) -- if all paths have positive length then all "optimal" lengths will double with each outermost iteration -- but that's pretty unlikely to happen here). Take the optimal path pair implied by the result of FW (each path can be calculated in the usual way, by keeping a table of "optimal predecessor" values of k for each vertex pair (i, j)), and simply walk along it, assigning to each vertex you visit the running total of the length so far, until you hit a vertex that you have already visited. At this point, either currentTotal - totalAtAlreadyVisitedVertex > 0, in which case the cycle you just found has positive weight and you're finished, or this difference is 0, in which case you can delete the edges corresponding to this cycle from the path and continue as usual.

Algorithms - Shortest Path with Contingent Costs

I am looking for an efficient algorithm to solve the following problem :
Given a directed, weighted graph G = (V, E), a source vertex S, a destination vertex T, and a subset M of V, it is required to find the shortest path from S to T.
The special feature of M is that once a vertex in M is 'visited', the weight of a certain edge changes to another value. Both the edge and its new weight are given for each vertex in M.
To help in understanding the problem, I have drawn an example using mspaint. (sorry for the quality).
In this example, the 'regular' shortest path from S to T is 1000.
However, visiting the vertex C will reduce the edge weight from 1000 to just 500, so the shortest path in this case is 200+100+500=800.
This problem is NP-hard and it is clearly in NP. The proof is a relatively straightforward gadget reduction.
This more or less rules out making significant improvements on the trivial, brute force algorithm for this problem. So what exactly are you hoping for when you say "efficient" here?
===
Proof
It might be that the problem statement has been unclear somehow so the version OP cares about isn't actually NP-complete. So I'll give some details of the proof.
For technical reasons, usually when we want to show a search problem is NP-hard we actually do it for an associated decision problem which is interreducible to the search problem. The decision problem here is "Given a directed weighted graph as described with associated edge-weight changing data, and a numeric value V, does the shortest path have value at most V?". Clearly, if you have an algorithm for the search problem, you can solve the decision problem easily, and if you have an algorithm for the decision problem, you can use it for the search problem -- you could use binary search essentially to determine the optimal value of V to precision greater than the input numbers, then alter the problem instance by deleting edges and checking if the optimal solution value changed in order to determine if an edge is in the path. So in the sequel I talk about the decision version of the problem.
The problem is in NP
First to see that it is in NP, we want to see that "yes" instances of the decision problem are certifiable in polynomial time. The certificate here is simply the shortest path. It is easy to see that the shortest path does not take more bits to describe than the graph itself. It is also easy to calculate the value of any particular path, you just go through the steps of the path and check what the value of the next edge was at that time. So the problem is in NP.
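For concreteness, here is one way that certificate check could look. This is my own sketch: the data structures (a cost per directed edge, and a map from each vertex in M to the edge it changes and the new cost) are assumptions, as is the convention that a vertex's change takes effect the moment the vertex is visited.

    def path_cost(weights, changes, path):
        """weights: dict (u, v) -> current edge cost.
        changes: dict vertex-in-M -> ((a, b), new_cost).
        path: list of vertices from S to T."""
        weights = dict(weights)                        # work on a copy
        total = 0
        for step, v in enumerate(path):
            if step > 0:
                total += weights[(path[step - 1], v)]  # pay the edge at its cost right now
            if v in changes:                           # visiting v triggers its change
                edge, new_cost = changes[v]
                weights[edge] = new_cost
        return total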
The problem is NP-hard
To see that it is NP-hard we reduce from 3SAT to the decision problem. That is the problem of determining the satisfiability of a boolean formula in CNF form in which each clause has at most 3 literals. For a complete definition of 3SAT see here: https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
The reduction which I will describe is a transformation which takes an instance of 3SAT and produces an input to the decision problem, with the property that the 3SAT instance is satisfiable if and only if the shortest path value is less than the specified threshold.
For any given 3SAT formula, the graph which we will produce has the following high-level structure. For each variable, there will be a "cloud" of vertices associated to it which are connected in certain ways, and some of those vertices are in M. The graph is arranged so that the shortest path must pass through each cloud exactly once, passing through the cloud for x1 first, then the cloud for x2, and so on. Then, there is also a (differently arranged) cloud for each clause of the formula. After passing through the last variable's cloud, the path must pass through the clouds of the clauses in succession. Then it reaches the terminal.
The basic idea is that, when going through the cloud for variable xi, there are exactly two different possible paths, and the choice represents a commitment to a truth value of xi. All of the costs of the edges for the variable clouds are the same, so it doesn't directly affect the path value. But, since some of them are in M, the choice of path there changes what the costs will be later in the clause clouds. The clause clouds enforce that if the variables we picked don't satisfy the clause, then we pay a price.
The variable cloud looks like this:
        *_*_*_*
       /       \
Entry *         * Exit
       \       /
        *_*_*_*
Here the stars are vertices and the lines are edges; all edges are directed to the right and all have the same cost, which we can take to be zero (or, if that's a problem, they could all be one; it doesn't really matter). I've shown 4 vertices on each of the two paths, but the actual number will depend on some things, as we will see.
The clause cloud looks like this:
        *
       / \
Entry *_*_* Exit
       \ /
        *
Again, the stars are vertices and all edges are directed to the right.
Each of the 3 central vertices is "labelled" (in our minds) and corresponds to one of the three literals in the clause. All of these edges again have cost 0.
The idea is that when I go through the variable cloud and pick a value for that variable, if I didn't satisfy the literal of some clause, then the cost of the corresponding edge in that clause's cloud should go up. Thus, as long as I actually satisfied at least one literal of the clause, I have a path from the entry to the exit which costs 0. And if every one of the literals was missed by this assignment, then I have to pay something larger than zero -- maybe 100 or something, it doesn't really matter.
Going back to the variable cloud now: the variable cloud for xi has 2m vertices in the middle part, where m is the number of clauses in which xi appears. Then, depending on whether xi appears positively or negatively in the k'th such clause, the k'th vertex of the top or the bottom path is in M and changes the edge in the corresponding clause cloud to have cost 100 (or whatever other fixed value).
The overall graph is made by simply pasting together the variable and clause clouds at their entry - exit nodes in succession. The threshold value is, say, 50.
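Here is a rough sketch of the whole construction in code. The encoding is entirely my own: the vertex names, the choice of which clause-cloud edge receives the penalty, and joining consecutive clouds with an extra 0-cost edge instead of merging their entry/exit vertices are all assumptions made for illustration.

    def build_reduction(clauses, num_vars):
        """clauses: list of tuples of nonzero ints (+i for xi, -i for NOT xi)."""
        edges = {}        # (u, v) -> base cost
        changes = {}      # vertex in M -> ((u, v), new cost)
        PENALTY = 100

        def chain(tag, length, start, end):
            """Directed path start -> ... -> end with `length` inner vertices, all cost 0."""
            inner = [(tag, k) for k in range(length)]
            nodes = [start] + inner + [end]
            for a, b in zip(nodes, nodes[1:]):
                edges[(a, b)] = 0
            return inner

        # Clause clouds: entry -> one vertex per literal -> exit, all cost 0.
        clause_edge = {}  # (clause index, literal position) -> the edge that may get the penalty
        for j, clause in enumerate(clauses):
            entry, exit_ = ('c_in', j), ('c_out', j)
            for pos in range(len(clause)):
                mid = ('lit', j, pos)
                edges[(entry, mid)] = 0
                edges[(mid, exit_)] = 0
                clause_edge[(j, pos)] = (mid, exit_)

        # Variable clouds: entry -> top (TRUE) path and bottom (FALSE) path -> exit.
        for i in range(1, num_vars + 1):
            occurrences = [(j, pos) for j, clause in enumerate(clauses)
                           for pos, lit in enumerate(clause) if abs(lit) == i]
            entry, exit_ = ('v_in', i), ('v_out', i)
            true_path = chain(('T', i), len(occurrences), entry, exit_)
            false_path = chain(('F', i), len(occurrences), entry, exit_)
            for k, (j, pos) in enumerate(occurrences):
                lit = clauses[j][pos]
                # Taking the path that does NOT satisfy this literal makes the
                # corresponding clause-cloud edge expensive.
                bad_vertex = false_path[k] if lit > 0 else true_path[k]
                changes[bad_vertex] = (clause_edge[(j, pos)], PENALTY)

        # Paste the clouds together in sequence: variable clouds, then clause clouds.
        order = ([(('v_in', i), ('v_out', i)) for i in range(1, num_vars + 1)] +
                 [(('c_in', j), ('c_out', j)) for j in range(len(clauses))])
        for (_, prev_out), (next_in, _) in zip(order, order[1:]):
            edges[(prev_out, next_in)] = 0
        S, T = order[0][0], order[-1][1]
        return edges, changes, S, T, 50                # threshold = 50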
The point is that, if there is a satisfying assignment to the 3SAT instance, then we can construct from it a path through the graph instance of cost 0, since we never pay anything in the variable clouds, and in each clause cloud we can always pick a path through a literal that was satisfied, so we don't pay anything there either. If there is no satisfying assignment to the 3SAT instance, then any path must get a clause wrong at some point and then pay at least 100. So if we set the threshold to 50 and make that part of the instance, the reduction is sound.
If the problem statement doesn't actually allow 0-cost edges, we can easily change the construction so that it only has positive edge costs, because the total number of edges in any path from start to finish is the same for every path. So if we add 1 to every edge cost and take the threshold to be 50 + (that path length) instead, nothing is changed.
That same trick of adding a fixed value to all of the edges and adjusting the threshold can be used to see that the problem is very strongly inapproximable also, as pointed out by David Eisenstat in comments.
Running time implications
If you are economical in how many vertices you assign to each variable cloud, the reduction takes a 3SAT instance with n variables (and hence input length O(n)) also to a graph instance of O(n) vertices, and the graph is sparse. (100n vertices should be way more than sufficient.) As a result, if you can give an algorithm for the stated problem with running time less than 2^{o(n)} on sparse graphs with n vertices, it implies a 2^{o(n)} algorithm for 3SAT which would be a major breakthrough in algorithms, and would disprove the "Exponential Time Hypothesis" of Impagliazzo and Paturi. So you can't really hope for more than a constant-factor-in-the-exponent improvement over the trivial algorithm in this problem.

flow network proof with cuts in it

I’m currently studying flow networks in the university, and my professor presented this theorem to us:
“Given a flow network and a flow B in it, such that for each vertex except the source and the sink: |∑(e:u→v) B(e) - ∑(e':v→u) B(e')| ≤ ε.
Note: this condition holds for every vertex v that is not the source or the sink of the network. e:u→v denotes every edge that goes, in a cutset, from the side of u to the side of v (so the first term is the sum of B(e) over all such edges), and e':v→u denotes every edge that goes, in the same cutset, from the side of v to the side of u.
Then there exists a new flow F such that, for every edge in the graph, |F(e) - B(e)| < ε*N (where N is the number of vertices in the graph).”
He claimed that a proof exists, but I can't get to the bottom of it. I was thinking about the fact that epsilon's lower bound is the min cut of the graph, but all the other ideas I had were useless. I'd appreciate any help. I searched for the proof on the web but couldn't find anything.
Given an antisymmetric assignment of quantities to edges, the excess of a vertex is the total quantity entering minus the total quantity exiting. For each vertex v with a negative excess -c, choose a path from the source s to v, multiply it by c, and add it to the assignment. For each vertex v with a positive excess c, choose a path from v to the sink t, multiply it by c, and add it to the assignment. It is straightforward to check that (1) all of the excesses are now zero, except at s and t, and (2) since every excess was at most epsilon in absolute value, the worst-case change for an edge is if it is involved in every one of these paths (there are at most N - 2 of them), for a total change of less than epsilon times N, the number of vertices.
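A small sketch of that repair procedure, under my own encoding: the assignment is stored as a signed net quantity per directed edge, and the s-to-v and v-to-t paths are supplied by the caller.

    def repair(B, paths_from_s, paths_to_t, vertices, s, t):
        """B: dict (u, v) -> net quantity sent from u to v.
        paths_from_s[v]: list of vertices from s to v; paths_to_t[v]: from v to t."""
        F = dict(B)

        def excess(v):                                 # total entering minus total exiting
            into = sum(q for (a, b), q in F.items() if b == v)
            out = sum(q for (a, b), q in F.items() if a == v)
            return into - out

        def push(path, amount):                        # add `amount` along every edge of the path
            for a, b in zip(path, path[1:]):
                F[(a, b)] = F.get((a, b), 0) + amount

        for v in vertices:
            if v in (s, t):
                continue
            c = excess(v)
            if c < 0:                                  # more leaves v than enters: top it up from s
                push(paths_from_s[v], -c)
            elif c > 0:                                # more enters v than leaves: drain it to t
                push(paths_to_t[v], c)
        return F

Note that pushing along a path adds the same amount into and out of every intermediate vertex, which is why fixing the vertices one at a time does not disturb the excesses of the others.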

maximum distribution with weighed edges

Imagine a graph where each vertex has a value (for example, a number of stones) and is connected to other vertices through weighted edges, where the weight represents the cost, in stones, of traversing that edge. I want to find the largest possible value such that every vertex Vn ends up with at least that many stones. Vertices can exchange stones with others, but the amount exchanged is reduced by the distance, i.e. the weight of the edges connecting them.
I need to solve this with a greedy algorithm in O(n) complexity, where n is the number of vertices, but I have problems identifying the subproblems/greedy choice. I was hoping that someone could provide a stepping stone or some hints on how to accomplish this; much appreciated.
Summary of Question
I am not sure I have understood the question correctly, so first I will summarize my understanding.
We have a graph with vertices v1,v2,..,vn and weighted edges. Let the weight between vi and vj be W[i,j]
Each vertex starts with a number of stones, let us call the number of stones on vertex vi equal to A[i]
You wish to perform multiple transfers in order to maximise the value of min(A[i] for i = 1..n)
x stones can be transferred from vi to vj if x > W[i,j]; this operation transforms the values as follows:
A[i] -= x
A[j] += x-W[i,j] # Note fewer stones arrive than leave
Is this correct?
Response
I believe this problem is NP-hard because it can be used to solve 3-SAT, a known NP-complete problem.
For a 3-sat example with M clauses such as:
(A+B+!C).(B+C+D)
Construct a directed graph which has a node for each clause (with no stones), a node for each variable with 3M+1 stones, and two auxiliary nodes for each variable with 1 stone each (one represents the variable having a positive value, and one represents the variable having a negative value).
Then connect the nodes as shown below:
This graph has a solution with all vertices having value >= 1 if and only if the 3-SAT instance is satisfiable.
The point is that each red node (e.g. for variable A) can only send stones to either A=1 or A=0, but not both. If we provide stones to the green node A=1, then this node can supply stones to all of the blue clauses which use that variable in a positive sense.
(Your original question does not involve a directed graph, but I doubt that this additional change will make a material difference to the complexity of the problem.)
Summary
I am afraid it is going to be very hard to get an O(n) solution to this problem.
