l - adjacency list
x - starting vertex
dfst, q - empty containers sized to the number of vertices
std::list <int> q;
std::vector<bool> visited(cols + 1);
for(int i = 0; i < cols; i++) visited[i] = false;
visited[x] = true;
if(!l[x].empty())
for(std::list<int>::iterator i = l[x].begin(); i != l[x].end(); i++)
{
q.push_back(x); q.push_back(* i);
}
while(!q.empty())
{
y = q.back(); q.pop_back();
x = q.back(); q.pop_back();
if(!visited[y])
{
visited[y] = true;
if(!l[y].empty())
for(std::list<int>::iterator i = l[y].begin(); i != l[y].end(); i++)
{
q.push_back(y); q.push_back(* i);
}
dfst[x].push_back(y);
dfst[y].push_back(x);
}
}
I just can't see why this gives wrong results... I don't know if you are familiar with this algorithm, but if you are, I hope you can see what's wrong here.
EDIT:
Adjacency list is:
1: 2, 3
2: 3
3: 4
4: 3
The MST here should be something like:
1: 2, 3
3: 4
But instead it's:
2: 3
3: 2, 4
4: 3
And the current code is (I added brackets where they were needed):
std::list <int> q;
std::vector<bool> visited(cols + 1);
for(int i = 0; i < cols; i++) visited[i] = false;
visited[x] = true;
if(!l[x].empty())
{
for(std::list<int>::iterator i = l[x].begin(); i != l[x].end(); i++)
{
q.push_back(x); q.push_back(* i);
}
while(!q.empty())
{
y = q.back(); q.pop_back();
x = q.back(); q.pop_back();
if(!visited[y])
{
visited[y] = true;
if(!l[y].empty())
for(std::list<int>::iterator i = l[y].begin(); i != l[y].end(); i++)
{
if(!visited[*i])
{q.push_back(y); q.push_back(* i);}
}
dfst[x].push_back(y);
dfst[y].push_back(x);
}
}
}
I worked through your code; it appears to be correct. However, keep in mind that there are multiple DFS trees if your input graph has a cycle. The tree that you get depends on the order in which you process your vertices. Perhaps your input has a cycle? If so, your code might be processing the nodes in an order different from the order in which they are processed in the solution.
For example, take the following adjacency list for an input graph, where nodes are lettered:
a: b,c
b: a,d
c: a,d,e
d: b,c,e
e: c,d
Starting at a, if we look in the adjacency list for the next node based on alphabetical order, we get the following DFS tree:
a: b
b: a,d
c: d,e
d: b,c
e: c
Using your algorithm though, we get the following:
a: c
b: d
c: a,e
d: b,e
e: c,d
If this is happening, perhaps try a recursive approach.
Seeing some sample input and output would help further with answering this question though, as this may not be your problem.
EDIT: I should clarify that multiple DFS trees occur in a graph with a cycle when the cycle can be traversed in both directions, as with an undirected graph. The point is, you might be finding a DFS tree that is also correct, just not the same as the one that has been identified as correct.
EDIT (again): Another issue you may have: your algorithm appears to be for undirected graphs, since you have the statements dfst[x].push_back(y); dfst[y].push_back(x); in your code, but the input graph given in your example looks like it's directed. Removing the second statement (dfst[y].push_back(x);) should correct this.
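To make the directed fix concrete, here is a minimal sketch of the question's loop with only the parent-to-child edge recorded. The names `l`, `dfst`, and the (parent, child) stack encoding come from the question's code; the function wrapper is an assumption added for self-containment.

```cpp
#include <list>
#include <vector>

// Hypothetical directed variant of the question's loop: only the
// parent -> child edge is recorded in the DFS tree 'dfst'.
void dfsTreeDirected(std::vector<std::list<int>>& l,
                     std::vector<std::list<int>>& dfst,
                     int start)
{
    std::vector<bool> visited(l.size(), false);
    std::list<int> q;                     // used as a stack of (parent, child) pairs
    visited[start] = true;
    for (int nb : l[start]) { q.push_back(start); q.push_back(nb); }
    while (!q.empty())
    {
        int y = q.back(); q.pop_back();   // child
        int x = q.back(); q.pop_back();   // parent
        if (!visited[y])
        {
            visited[y] = true;
            for (int nb : l[y])
                if (!visited[nb]) { q.push_back(y); q.push_back(nb); }
            dfst[x].push_back(y);         // record only the directed edge x -> y
        }
    }
}
```

On the example graph from the question (1: 2,3; 2: 3; 3: 4; 4: 3) this produces tree edges 1->3, 1->2, and 3->4, matching the expected spanning tree up to neighbor order.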
The code looks a little confusing; at first I thought it was BFS, but it's DFS.
You should check whether the node has been visited before adding it to the stack (the back of the list):
while(!q.empty())
{
y = q.back(); q.pop_back();
x = q.back(); q.pop_back();
if(!visited[y])
{
visited[y] = true;
if(!l[y].empty())
for(std::list<int>::iterator i = l[y].begin(); i != l[y].end(); i++)
{
if(!visited[*i])
{q.push_back(y); q.push_back(* i);}
}
dfst[x].push_back(y);
dfst[y].push_back(x);
}
}
This question came to mind while solving some Dijkstra-based LeetCode problems.
We have (node, distance) pairs in the priority queue at each step.
Whether duplicate nodes can appear in the heap depends on one thing: when we mark a node as visited (i.e., confirm that we have found the shortest distance to it). If we mark it while pushing into the queue, we will not have any duplicates; if we mark it after popping from the queue, we may have duplicates in the queue.
https://leetcode.com/problems/network-delay-time/
In this question we can mark a node as visited only after popping it from the priority queue, or we will miss some edge that may lead to a shorter path.
Ex:
[[1,2,1],[2,3,2],[1,3,4]]
3
1
If we mark while inserting, we get the wrong answer. While exploring 1's neighbors we do:
1->2: queue = {(2,1)}, visited = {1,2}
1->3: queue = {(2,1), (3,4)}, visited = {1,2,3}
Since all nodes are now marked visited, we never relax the path 1->2->3 with distance 1+2 = 3.
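The mark-after-pop discipline described above can be sketched as a generic lazy Dijkstra (function and variable names here are illustrative, not taken from any particular LeetCode solution):

```cpp
#include <limits>
#include <queue>
#include <vector>

// Lazy Dijkstra: duplicates are allowed in the heap, and a node is
// marked settled only when it is popped with its smallest distance.
// adj[u] is a list of (neighbor, weight) pairs.
std::vector<long long> dijkstra(const std::vector<std::vector<std::pair<int,int>>>& adj,
                                int source)
{
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    std::vector<bool> settled(adj.size(), false);
    using Item = std::pair<long long,int>;              // (distance, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[source] = 0;
    pq.push({0, source});
    while (!pq.empty())
    {
        auto [d, u] = pq.top(); pq.pop();
        if (settled[u]) continue;                       // stale duplicate entry
        settled[u] = true;                              // mark AFTER popping
        for (auto [v, w] : adj[u])
            if (d + w < dist[v])
            {
                dist[v] = d + w;
                pq.push({dist[v], v});                  // may duplicate v in the heap
            }
    }
    return dist;
}
```

On the example above (edges 1->2 cost 1, 2->3 cost 2, 1->3 cost 4), this version finds dist(3) = 3 via 1->2->3, exactly the path that the mark-on-push variant misses.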
But in other questions we can do Dijkstra with visited marked before insertion into the priority queue, e.g.:
https://leetcode.com/problems/swim-in-rising-water/
Why is Dijkstra with visited marked before insertion valid here?
BFS visits nodes blindly (you may assume all weights are 1), while Dijkstra prioritizes the least-weighted path.
When can we have duplicate nodes in the heap in Dijkstra algorithm?
        a
     4 / \ 2
      /   \
     b-----c
     |  1
   4 |
     d
1. Start Dijkstra from a. queue = (a, 0)
2. b and c are pushed with their path costs. queue = (b, 4), (c, 2)
3. c is popped; b is pushed again with another cost. queue = (b, 4), (b, 3), where the new (b, 3) has cost (ac + cb)
4. The cheaper b is popped; d is pushed. queue = (b, 4), (d, 7)
5. Since we check and mark after the pop, the other b is popped. queue = (d, 7)
6. But b is already visited, so it is skipped
7. Process d
But in other questions we can do a Dijkstra with visited marked before the insertion into the priority queue, ex: https://leetcode.com/problems/swim-in-rising-water
why is Dijkstra with visited marked before the insertion valid here?
It depends largely on the problem itself. In this particular problem we get the weight directly from the node, so no matter whether we mark before the push or after the pop, the node will be popped at the same time; checking visited before the push merely prevents redundant pushes.
Here is my accepted implementation, where you can comment out either the after-pop or the before-push visited check, and both versions get accepted.
class Solution {
struct PosWeight {
pair<int, int> pos;
int weight;
PosWeight(pair<int, int> a, int weight): pos(a), weight(weight){}
};
int visited[51][51] = {0};
int traverse(vector<vector<int>>& grid) {
int row = size(grid);
int column = size(grid[0]);
vector<pair<int, int>> directions = { {0,1}, {1,0}, {-1,0}, {0,-1} };
auto isValidTo = [&grid, row, column]
(pair<int, int> direction, pair<int, int> pos) {
if (pos.first + direction.first >= row) return false;
if (pos.first + direction.first < 0) return false;
if (pos.second + direction.second >= column) return false;
if (pos.second + direction.second < 0) return false;
return true;
};
auto comp = [](PosWeight &a, PosWeight &b) {
return a.weight > b.weight;
};
int maxx =INT_MIN;
priority_queue<PosWeight, vector<PosWeight>, decltype(comp)> pq(comp);
pq.emplace(make_pair(0,0), grid[0][0]);
while(!pq.empty()) {
auto elem = pq.top();
pq.pop();
// You can comment out this portion and keep the later one before pushing in queue
if (visited[elem.pos.first][elem.pos.second]) continue;
visited[elem.pos.first][elem.pos.second] = 1;
// ---
maxx = max(maxx, elem.weight);
if (elem.pos.first == row - 1 && elem.pos.second == column - 1)
return maxx;
for(auto direction: directions)
if (isValidTo(direction, elem.pos)) {
pair<int,int> toGo = make_pair( direction.first + elem.pos.first,
direction.second + elem.pos.second );
auto weight = grid[toGo.first][toGo.second];
// You can comment out this portion and keep the later one before pushing in queue
// if (visited[toGo.first][toGo.second]) continue;
// visited[toGo.first][toGo.second] = 1;
// -----
pq.emplace(toGo, weight);
}
}
return maxx;
}
public:
int swimInWater(vector<vector<int>>& grid) {
return traverse(grid);
}
};
But for the https://leetcode.com/problems/network-delay-time problem, the check must happen after the pop: doing it before the push marks nodes too early, visiting them all quickly instead of in prioritized shortest-distance order, as you stated in your question.
class Solution {
public:
int networkDelayTime(vector<vector<int>>& times, int n, int k) {
auto comp = [](pair<int,int> &a, pair<int,int> &b) {
return a.second > b.second;
};
priority_queue<pair<int,int>, vector<pair<int,int>>, decltype(comp)> que(comp);
vector<vector<int>> rel(n, vector<int>(n, -1));
for (auto &time: times)
rel[time[0] - 1][time[1] - 1] = time[2];
vector<int> visit(n, 0);
que.push({k-1, 0});
int sz = n;
while (size(que)) {
auto now = que.top();
que.pop();
if (visit[now.first]) continue;
visit[now.first] = 1;
if (!--sz) return now.second;
auto id = now.first, val = now.second;
for (int i = 0; i < n; ++i)
if (rel[id][i] != -1) {
// cant do checking here
que.push({i, val + rel[id][i]});
}
}
return -1;
}
};
So, bottom line, it depends on the nature and requirement of the problem.
I have some questions about Prim's algorithm.
Prim's algorithm finds an MST. The general implementation initializes all nodes' distances to INF, but I don't understand why this initialization is needed.
Here is my implementation
#include<iostream>
#include<queue>
#include<tuple>
#include<algorithm>
#include<vector>
using namespace std;
typedef tuple<int,int,int> ti;
int main(void)
{
ios::sync_with_stdio(0);
cin.tie(0);
bool vis[1005] = {false}; // zero-initialize visited flags
vector<pair<int,int>> vertex[1005];
int V,E;
int u,v,w;
int sum = 0;
int cnt = 0;
priority_queue<ti,vector<ti>,greater<ti>> pq;
cin >> V >> E;
for(int i = 0; i < E; i++)
{
cin >> u >> v >> w;
vertex[u].push_back({v,w});
vertex[v].push_back({u,w});
}
for(auto i : vertex[1]){
pq.push({i.second,1,i.first});
}
vis[1] = true;
while(!pq.empty())
{
tie(w,u,v) = pq.top(); pq.pop();
if(vis[v]) continue;
vis[v] = true;
sum += w;
cnt++;
for(auto i : vertex[v]){
if(!vis[i.first])
pq.push({i.second,v,i.first});
}
if(cnt == V-1) break;
}
// VlogV
cout << sum;
return 0;
}
(Please ignore the indentation; it's a code-paste error.)
In this code, you can find the sum of the MST, in what I claim is O(V log V). We can also record each vertex's parent (vis[v] = true, pre[v] = u), so we know the structure of the MST.
When we don't need a distance array, we can implement Prim's algorithm in O(V log V); in almost every case (outside of MST-specific requirements) it is faster than Kruskal's.
I know I must be wrong somewhere, and I want to know where.
So is there any reason why we use a distance array?
Your conclusion that this algorithm works in O(Vlog(V)) seems to be wrong. Here is why:
while(!pq.empty()) // executed |V| times
{
tie(w,u,v) = pq.top();
pq.pop(); // log(|V|) for each pop operation
if(vis[v]) continue;
vis[v] = true;
sum += w;
cnt++;
for(auto i : vertex[v]){ // check the vertices of edge v - |E| times in total
if(!vis[i.first])
pq.push({i.second,v,i.first}); // log(|V|) for each push operation
}
if(cnt == V-1) break;
}
First of all, notice that the while loop body does real work at most |V| times, once per vertex that gets marked visited (although the pq can hold more entries than that, up to |E|).
However, also notice that you have to traverse all the vertices in the line:
for(auto i : vertex[v])
Therefore it takes |E| number of operations in total.
Notice that each push and pop operation takes approximately log(|V|) operations.
So what do we have?
We have |V| many iterations and log(|V|) number of push/pop operations in each iteration, which makes V * log(V) number of operations.
On the other hand, we have |E| neighbor iterations in total, with a log(|V|) push operation in each, which makes E * log(V) operations.
In conclusion, we have V*log(V) + E*log(V) operations in total. In most cases V < E for a connected graph, so the time complexity is O(E*log(V)).
So, time complexity of Prim's Algorithm doesn't depend on keeping a distance array. Still, you have to make the iterations mentioned above.
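For contrast, here is a hedged sketch of what a distance (key) array buys you in Prim's algorithm: a vertex is pushed only when its best known connecting edge improves, which cuts down stale heap entries in practice, though with a binary heap the worst-case bound remains O(E log V). All names here are illustrative.

```cpp
#include <limits>
#include <queue>
#include <vector>

// Prim's algorithm with a dist (key) array over adjacency lists of
// (neighbor, weight) pairs; returns the total weight of the MST,
// assuming the graph is connected and vertex 0 exists.
long long primWithDist(const std::vector<std::vector<std::pair<int,int>>>& adj)
{
    const int INF = std::numeric_limits<int>::max();
    int n = adj.size();
    std::vector<int> dist(n, INF);                      // best edge into the tree
    std::vector<bool> inMST(n, false);
    using Item = std::pair<int,int>;                    // (key, vertex)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[0] = 0;
    pq.push({0, 0});
    long long total = 0;
    while (!pq.empty())
    {
        auto [d, u] = pq.top(); pq.pop();
        if (inMST[u]) continue;                         // stale entry, skip
        inMST[u] = true;
        total += d;
        for (auto [v, w] : adj[u])
            if (!inMST[v] && w < dist[v])               // push only improvements
            {
                dist[v] = w;
                pq.push({w, v});
            }
    }
    return total;
}
```

The `w < dist[v]` guard is the only part the questioner's code omits; correctness is unaffected, since stale entries are skipped at pop time either way.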
I want to model the following puzzle with a graph.
The barman gives you three
glasses whose sizes are 1000ml, 700ml, and 400ml, respectively. The 700ml and 400ml glasses start
out full of beer, but the 1000ml glass is initially empty. You can get unlimited free beer if you win
the following game:
Game rule: You can keep pouring beer from one glass into another, stopping only when the source
glass is empty or the destination glass is full. You win if there is a sequence of pourings that leaves
exactly 200ml in the 700ml or 400 ml glass.
I was a little unsure of how to translate this problem into a graph. My thought was that the glasses would be represented by nodes in a weighted, undirected graph, where an edge indicates that a glass u can be poured into a glass v and vice versa; a walk would then be a sequence of pourings leading to the correct solution.
However, this approach of three single nodes with undirected edges doesn't quite work for Dijkstra's algorithm or other greedy algorithms, which is what I was going to use to solve the problem. Would modeling the permutations of the pourings as a graph be more suitable?
You should store the whole state as a vertex. I mean, the amount in each glass is a component of the state, so a state is an array of glassesCount numbers. For example, the initial state is (700, 400, 0).
After that you should add the initial state to a queue and run BFS. BFS is applicable because each edge has equal weight 1: the weight is the number of pourings between two states, which is obviously 1, as we only generate states directly reachable from each state in the queue.
You may also use DFS, but BFS returns the shortest sequence of pourings, because BFS gives shortest paths in 1-weighted graphs. If you are not interested in the shortest sequence of pourings but in any solution, DFS is fine. I will describe BFS because it has the same complexity as DFS and returns a better (shorter) solution.
In each state, BFS has to generate all possible new states by pouring between all pairwise combinations of glasses, checking first that the pouring is possible.
For 3 glasses there are 3*(3-1) = 6 possible branches from each state, but I implemented a more generic solution, allowing you to use my code for N glasses.
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;

public class Solution{
static HashSet<State> usedStates = new HashSet<State>();
static HashMap<State,State> prev = new HashMap<State, State>();
static ArrayDeque<State> queue = new ArrayDeque<State>();
static short[] limits = new short[]{700,400,1000};
public static void main(String[] args){
State initialState = new State(new Short[]{700,400,0});
usedStates.add(initialState);
queue.add(initialState);
prev.put(initialState,null);
boolean solutionFound = false;
while(!queue.isEmpty()){
State curState = queue.poll();
if(curState.isWinning()){
printSolution(curState);
solutionFound = true;
break; //stop BFS even if queue is not empty because solution already found
}
// go to all possible states
for(int i=0;i<curState.getGlasses().length;i++)
for(int j=0;j<curState.getGlasses().length;j++) {
if (i != j) { //pouring from i-th glass to j-th glass, can't pour to itself
short glassI = curState.getGlasses()[i];
short glassJ = curState.getGlasses()[j];
short possibleToPour = (short)(limits[j]-glassJ);
short amountToPour;
if(glassI<possibleToPour) amountToPour = glassI; //pour total i-th glass
else amountToPour = possibleToPour; //pour i-th glass partially
if(glassI!=0){ //prepare new state
Short[] newGlasses = Arrays.copyOf(curState.getGlasses(), curState.getGlasses().length);
newGlasses[i] = (short)(glassI-amountToPour);
newGlasses[j] = (short)(newGlasses[j]+amountToPour);
State newState = new State(newGlasses);
if(!usedStates.contains(newState)){ // if new state not handled before mark it as used and add to queue for future handling
usedStates.add(newState);
prev.put(newState, curState);
queue.add(newState);
}
}
}
}
}
if(!solutionFound) System.out.println("Solution does not exist");
}
private static void printSolution(State curState) {
System.out.println("below is 'reversed' solution. In order to get solution from initial state read states from the end");
while(curState!=null){
System.out.println("("+curState.getGlasses()[0]+","+curState.getGlasses()[1]+","+curState.getGlasses()[2]+")");
curState = prev.get(curState);
}
}
static class State{
private Short[] glasses;
public State(Short[] glasses){
this.glasses = glasses;
}
public boolean isWinning() {
return glasses[0]==200 || glasses[1]==200;
}
public Short[] getGlasses(){
return glasses;
}
@Override
public boolean equals(Object other){
return Arrays.equals(glasses,((State)other).getGlasses());
}
@Override
public int hashCode(){
return Arrays.hashCode(glasses);
}
}
}
Output:
below is 'reversed' solution. In order to get solution from initial
state read states from the end
(700,200,200)
(500,400,200)
(500,0,600)
(100,400,600)
(100,0,1000)
(700,0,400)
(700,400,0)
An interesting fact: this problem has no solution if we replace "200ml in g1 OR g2" with "200ml in g1 AND g2". I mean, the state (200,200,700) is unreachable from (700,400,0).
If we want to model this problem with a graph, each node should represent a possible assignment of beer volume to glasses. Suppose we represent each glass with an object like this:
{ volume: <current volume>, max: <maximum volume> }
Then the starting node is a list of three such objects:
[ { volume: 0, max: 1000 }, { volume: 700, max: 700 }, { volume: 400, max: 400 } ]
An edge represents the action of pouring one glass into another. To perform such an action, we pick a source glass and a target glass, then calculate how much we can pour from the source to the target:
function pour(indexA, indexB, glasses) { // Pour from A to B.
var a = glasses[indexA],
b = glasses[indexB],
delta = Math.min(a.volume, b.max - b.volume);
a.volume -= delta;
b.volume += delta;
}
From the starting node we try pouring from each glass to every other glass. Each of these actions results in a new assignment of beer volumes. We check each one to see if we have achieved the target volume of 200. If not, we push the assignment into a queue.
To find the shortest path from the starting node to a target node, we push newly discovered nodes onto the head of the queue and pop nodes off the end of the queue. This ensures that when we reach a target node, it is no farther from the starting node than any other node in the queue.
To make it possible to reconstruct the shortest path, we store the predecessor of each node in a dictionary. We can use the same dictionary to make sure that we don't explore a node more than once.
The following is a JavaScript implementation of this approach.
function pour(indexA, indexB, glasses) { // Pour from A to B.
var a = glasses[indexA],
b = glasses[indexB],
delta = Math.min(a.volume, b.max - b.volume);
a.volume -= delta;
b.volume += delta;
}
function glassesToKey(glasses) {
return JSON.stringify(glasses);
}
function keyToGlasses(key) {
return JSON.parse(key);
}
function print(s) {
s = s || '';
document.write(s + '<br />');
}
function displayKey(key) {
var glasses = keyToGlasses(key);
parts = glasses.map(function (glass) {
return glass.volume + '/' + glass.max;
});
print('volumes: ' + parts.join(', '));
}
var startGlasses = [ { volume: 0, max: 1000 },
{ volume: 700, max: 700 },
{ volume: 400, max: 400 } ];
var startKey = glassesToKey(startGlasses);
function solve(targetVolume) {
var actions = {},
queue = [ startKey ],
tail = 0;
while (tail < queue.length) {
var key = queue[tail++]; // Pop from tail.
for (var i = 0; i < startGlasses.length; ++i) { // Pick source.
for (var j = 0; j < startGlasses.length; ++j) { // Pick target.
if (i != j) {
var glasses = keyToGlasses(key);
pour(i, j, glasses);
var nextKey = glassesToKey(glasses);
if (actions[nextKey] !== undefined) {
continue;
}
actions[nextKey] = { key: key, source: i, target: j };
for (var k = 1; k < glasses.length; ++k) {
if (glasses[k].volume === targetVolume) { // Are we done?
var path = [ actions[nextKey] ];
while (key != startKey) { // Backtrack.
var action = actions[key];
path.push(action);
key = action.key;
}
path.reverse();
path.forEach(function (action) { // Display path.
displayKey(action.key);
print('pour from glass ' + (action.source + 1) +
' to glass ' + (action.target + 1));
print();
});
displayKey(nextKey);
return;
}
queue.push(nextKey);
}
}
}
}
}
}
solve(200);
I had the idea of demonstrating the elegance of constraint programming after the two independent brute force solutions above were given. It doesn't actually answer the OP's question, just solves the puzzle. Admittedly, I expected it to be shorter.
par int:N = 7; % only an alcoholic would try more than 7 moves
var 1..N: n; % the sequence of states is clearly at least length 1. ie the start state
int:X = 10; % capacities
int:Y = 7;
int:Z = 4;
int:T = Y + Z;
array[0..N] of var 0..X: x; % the amount of liquid in glass X the biggest
array[0..N] of var 0..Y: y;
array[0..N] of var 0..Z: z;
constraint x[0] = 0; % initial contents
constraint y[0] = 7;
constraint z[0] = 4;
% the total amount of liquid is the same as the initial amount at all times
constraint forall(i in 0..n)(x[i] + y[i] + z[i] = T);
% we get free unlimited beer if any of these glasses contains 2dl
constraint y[n] = 2 \/ z[n] = 2;
constraint forall(i in 0..n-1)(
% d is the amount we can pour from one glass to another: 6 ways to do it
let {var int: d = min(y[i], X-x[i])} in (x[i+1] = x[i] + d /\ y[i+1] = y[i] - d) \/ % y to x
let {var int: d = min(z[i], X-x[i])} in (x[i+1] = x[i] + d /\ z[i+1] = z[i] - d) \/ % z to x
let {var int: d = min(x[i], Y-y[i])} in (y[i+1] = y[i] + d /\ x[i+1] = x[i] - d) \/ % x to y
let {var int: d = min(z[i], Y-y[i])} in (y[i+1] = y[i] + d /\ z[i+1] = z[i] - d) \/ % z to y
let {var int: d = min(y[i], Z-z[i])} in (z[i+1] = z[i] + d /\ y[i+1] = y[i] - d) \/ % y to z
let {var int: d = min(x[i], Z-z[i])} in (z[i+1] = z[i] + d /\ x[i+1] = x[i] - d) % x to z
);
solve minimize n;
output[show(n), "\n\n", show(x), "\n", show(y), "\n", show(z)];
and the output is
[0, 4, 10, 6, 6, 2, 2]
[7, 7, 1, 1, 5, 5, 7]
[4, 0, 0, 4, 0, 4, 2]
which luckily coincides with the other solutions. Feed it to the MiniZinc solver and wait... and wait. No loops, no BFS, no DFS.
E.g., for the edges 1->2, 2->3, 3->4, 4->2, I want to print 2, 3, 4.
I tried DFS: when I find a vertex I have visited before, I walk back through parents until I reach that vertex again, but it does not work well. Sometimes it enters an infinite loop.
Run dfs:
int i;
for (i = 0; i < MAX_VER; i += 1)
if (ver[i].v == 0 && ver[i].nb > 0)
dfs(i);
dfs:
ver[v].v = 1;
int i;
for (i = 0; i < ver[v].nb; i += 1) {
ver[ver[v].to[i]].p = v;
if (ver[ver[v].to[i]].v == 0)
dfs(ver[v].to[i]);
else
// cycle found
printCycle(ver[v].to[i]);
}
and print cycle:
printf("\ncycle: %d ", v);
int p = ver[v].p;
while (p != v) {
printf("%d(%d) ", p, v);
p = ver[p].p;
}
printf("\n");
Vertex struct:
int *to; // neighbor list
int nb; // how many neighbor
int p; // parent
short v; // was visited? 0 = false, 1 = true
It sounds like you are looking for "Strongly Connected Components" - so you are in luck, there is a well known algorithm for finding these in a graph. See Tarjan.
The algorithm is pretty well described in that article, but it's a bit long winded so I won't paste it here. Also, unless you are doing this for study you will probably be better off using an existing implementation, it's not that hard to implement but it's not that easy either.
EDIT. It looks like this question is actually a dupe... it pains me to say this but it probably needs to be closed, sorry. See Best algorithm for detecting cycles in a directed graph
You should use vertex coloring to avoid an infinite loop in DFS.
Initially, all vertices are marked WHITE. When you discover a vertex for the first time (while it is still WHITE), mark it GREY. If you ever reach a GREY vertex again, you have found a cycle.
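A minimal sketch of this coloring scheme as a recursive DFS (all names are illustrative; a third color, BLACK, marks finished vertices so that cross edges to already-processed parts of the graph are not mistaken for cycles):

```cpp
#include <vector>

enum Color { WHITE, GREY, BLACK };

// Returns true if a cycle is reachable from u. 'parent' records
// predecessors, so a caller could reconstruct the cycle by walking
// back from the vertex where the GREY hit occurred.
bool dfsCycle(int u, const std::vector<std::vector<int>>& adj,
              std::vector<Color>& color, std::vector<int>& parent)
{
    color[u] = GREY;                           // u is on the current DFS path
    for (int v : adj[u])
    {
        if (color[v] == GREY) return true;     // back edge: cycle found
        if (color[v] == WHITE)
        {
            parent[v] = u;
            if (dfsCycle(v, adj, color, parent)) return true;
        }
    }
    color[u] = BLACK;                          // fully processed, off the path
    return false;
}

bool hasCycle(const std::vector<std::vector<int>>& adj)
{
    int n = adj.size();
    std::vector<Color> color(n, WHITE);
    std::vector<int> parent(n, -1);
    for (int u = 0; u < n; ++u)
        if (color[u] == WHITE && dfsCycle(u, adj, color, parent))
            return true;
    return false;
}
```

This also fixes the infinite loop the questioner describes: a GREY hit terminates the search instead of repeatedly walking parent pointers.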
I've implemented the Edmonds-Karp algorithm, but it seems it's not correct and I'm not getting the correct flow. Consider the following graph and a flow from 4 to 8:
Algorithm runs as follow:
First finds 4→1→8,
Then finds 4→5→8
after that 4→1→6→8
And I think the third path is wrong, because after using it we can no longer use the edge 6→8 (it is saturated), and in fact we can no longer use the path 4→5→6→8.
In fact, if we choose 4→5→6→8, then 4→1→3→7→8, and then 4→1→3→7→8, we can get a better flow (40).
I implemented the algorithm from the wiki's sample code. I think we can't just use any valid path, and that this greedy selection is wrong.
Am I wrong?
The code is below (in C#; threshold is 0 and doesn't affect the algorithm):
public decimal EdmondKarps(decimal[][] capacities/*Capacity matrix*/,
List<int>[] neighbors/*Neighbour lists*/,
int s /*source*/,
int t/*sink*/,
decimal threshold,
out decimal[][] flowMatrix
/*flowMatrix (A matrix giving a legal flowMatrix with the maximum value)*/
)
{
THRESHOLD = threshold;
int n = capacities.Length;
decimal flow = 0m; // (Initial flowMatrix is zero)
flowMatrix = new decimal[n][]; //array(1..n, 1..n) (Residual capacity from u to v is capacities[u,v] - flowMatrix[u,v])
for (int i = 0; i < n; i++)
{
flowMatrix[i] = new decimal[n];
}
while (true)
{
var path = new int[n];
var pathCapacity = BreadthFirstSearch(capacities, neighbors, s, t, flowMatrix, out path);
if (pathCapacity <= threshold)
break;
flow += pathCapacity;
//(Backtrack search, and update flowMatrix)
var v = t;
while (v != s)
{
var u = path[v];
flowMatrix[u][v] = flowMatrix[u][v] + pathCapacity;
flowMatrix[v][u] = flowMatrix[v][u] - pathCapacity;
v = u;
}
}
return flow;
}
private decimal BreadthFirstSearch(decimal[][] capacities, List<int>[] neighbors, int s, int t, decimal[][] flowMatrix, out int[] path)
{
var n = capacities.Length;
path = Enumerable.Range(0, n).Select(x => -1).ToArray();//array(1..n)
path[s] = -2;
var pathFlow = new decimal[n];
pathFlow[s] = Decimal.MaxValue; // INFINT
var Q = new Queue<int>(); // Q is exactly Queue :)
Q.Enqueue(s);
while (Q.Count > 0)
{
var u = Q.Dequeue();
for (int i = 0; i < neighbors[u].Count; i++)
{
var v = neighbors[u][i];
//(If there is available capacity, and v is not seen before in search)
if (capacities[u][v] - flowMatrix[u][v] > THRESHOLD && path[v] == -1)
{
// save path:
path[v] = u;
pathFlow[v] = Math.Min(pathFlow[u], capacities[u][v] - flowMatrix[u][v]);
if (v != t)
Q.Enqueue(v);
else
return pathFlow[t];
}
}
}
return 0;
}
The way you choose augmenting paths is not important.
What matters is that you add reverse edges along the path, with the path's capacity, and reduce the capacity of the path's forward edges by that value.
In fact this solution works:
while there is a path with positive capacity from source to sink {
    find any path P with positive capacity C from source to sink.
    add C to maximum_flow_value.
    reduce the capacity of each edge of P by C.
    add C to the capacity of each edge of reverse(P).
}
Finally, the value of the maximum flow is the sum of the Cs accumulated in the loop.
If you want to see the per-edge flow in the maximum flow you built, retain the initial graph somewhere; the flow on edge e is then original_capacity_e - current_capacity_e.
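The augment step in the loop above can be sketched directly on a residual-capacity matrix (a simplified stand-in for the question's capacities/flowMatrix pair; all names are illustrative):

```cpp
#include <vector>

// Apply one augmenting path to a residual-capacity matrix.
// parent[v] is the predecessor of v on the path found by BFS,
// s is the source, t the sink, and pathCap the path's bottleneck.
void augment(std::vector<std::vector<long long>>& residual,
             const std::vector<int>& parent, int s, int t, long long pathCap)
{
    for (int v = t; v != s; v = parent[v])
    {
        int u = parent[v];
        residual[u][v] -= pathCap;   // forward edge loses capacity
        residual[v][u] += pathCap;   // reverse edge gains it, allowing "undo"
    }
}
```

The reverse-edge increment is what lets a later BFS cancel flow sent down a bad early path, which is exactly why the choice of augmenting paths does not affect the final flow value.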