Bellman-Ford Algorithm

I know that the Bellman-Ford algorithm takes at most |V| - 1 iterations to find the shortest path if the graph does not contain a negative-weight cycle. Is there a way to modify the Bellman-Ford algorithm so it will find the shortest path in 1 iteration?

No. The worst-case running time of Bellman-Ford is O(E*V), which comes from the need to make up to V-1 passes over the edges. However, a queue-based Bellman-Ford variant typically runs in O(E+V) time in practice, although its worst case is unchanged.
Here is a queue-based Bellman-Ford implementation, adapted from the booksite of Algorithms, 4th edition, by Robert Sedgewick and Kevin Wayne:
private void findShortestPath(int src) {
    queue.add(src);
    distTo[src] = 0;
    edgeTo[src] = -1;
    while (!queue.isEmpty()) {
        int v = queue.poll();
        onQueue[v] = false;
        for (Edge e : adj(v)) {
            int w = e.dest;
            // Relax the edge v -> w.
            if (distTo[w] > distTo[v] + e.weight) {
                distTo[w] = distTo[v] + e.weight;
                edgeTo[w] = v;
                // Only enqueue w if its distance actually improved.
                if (!onQueue[w]) {
                    onQueue[w] = true;
                    queue.add(w);
                }
            }
            // Check whether a negative cycle exists after every V edge relaxations.
            if (cost++ % V == 0) {
                if (findNegativeCycle())
                    return;
            }
        }
    }
}
The worst-case running time of this algorithm is still O(E*V), but in practice it typically runs in O(E+V).
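For context, here is a minimal sketch (my reconstruction, not part of the original answer) of the supporting state that findShortestPath relies on; the Edge type, adj() and findNegativeCycle() are assumed to exist as in the snippet above.
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

class QueueBasedBellmanFord {
    private final int V;                      // number of vertices
    private final double[] distTo;            // distTo[v] = length of best known path to v
    private final int[] edgeTo;               // edgeTo[v] = previous vertex on that path
    private final boolean[] onQueue;          // is v currently on the queue?
    private final Queue<Integer> queue = new ArrayDeque<>();
    private int cost;                         // number of edge relaxations performed

    QueueBasedBellmanFord(int V) {
        this.V = V;
        distTo = new double[V];
        edgeTo = new int[V];
        onQueue = new boolean[V];
        Arrays.fill(distTo, Double.POSITIVE_INFINITY);
    }

    // findShortestPath(int src), adj(int v) and findNegativeCycle() as above.
}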

Related

Bellman Ford Algorithm fails to compute shortest path for a directed edge-weighted graph

I was recently studying shortest-path algorithms when I encountered the problem below in the book Algorithms, 4th edition, by Robert Sedgewick and Kevin Wayne.
Suppose that we convert an EdgeWeightedGraph into an EdgeWeightedDigraph by creating two DirectedEdge objects in the EdgeWeightedDigraph (one in each direction) for each Edge in the EdgeWeightedGraph, and then use the Bellman-Ford algorithm. Explain why this approach fails spectacularly.
Below is a piece of my code implementing Bellman Ford Algorithm (queue-based):
private void findShortestPath(int src) {
    queue.add(src);
    distTo[src] = 0;
    edgeTo[src] = -1;
    while (!queue.isEmpty()) {
        int v = queue.poll();
        onQueue[v] = false;
        for (Edge e : adj(v)) {
            int w = e.dest;
            if (distTo[w] > distTo[v] + e.weight) {
                distTo[w] = distTo[v] + e.weight;
                edgeTo[w] = v;
            }
            if (!onQueue[w]) {
                onQueue[w] = true;
                queue.add(w);
            }
            // Find whether a negative cycle exists after every V passes
            if (cost++ % V == 0) {
                if (findNegativeCycle())
                    return;
            }
        }
    }
}
I've tried many examples on paper, but I am unable to find a scenario where the generated directed graph would contain a new negative cycle, simply as a result of converting each edge into two edges in opposite directions. I assume that there were no pre-existing negative cycles in the undirected edge-weighted graph.
Your algorithm and code have no problem: they give the shortest distances in a graph without a negative cycle. It is also easy to detect a negative cycle using an array number[i] that records how many times vertex i has been put on the queue.
It can be proved that if a vertex is put on the queue more than |V| (the number of vertices) times, there is a negative cycle in the graph. So, when enqueuing v, you can add:
number[v]++;
if (number[v] > V) {
    // v was enqueued more than V times: a negative cycle exists
    return;
}
Shortest distances in a graph with a negative cycle have no meaning, so in most situations you just need to report something when you find one. (As for the exercise itself: note that a single undirected edge with negative weight becomes a two-edge directed cycle v→w→v with negative total weight, so the conversion creates a negative cycle even though the undirected graph had none.)
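Putting the counting check into the queue-based loop might look like the following sketch (my illustration, reusing the fields from the code above plus the int[] number array):
// Sketch: same fields as the question's code, plus
// int[] number = new int[V]; // number[w] = times w has been enqueued
private boolean findShortestPath(int src) {
    queue.add(src);
    number[src]++;
    distTo[src] = 0;
    edgeTo[src] = -1;
    while (!queue.isEmpty()) {
        int v = queue.poll();
        onQueue[v] = false;
        for (Edge e : adj(v)) {
            int w = e.dest;
            if (distTo[w] > distTo[v] + e.weight) {
                distTo[w] = distTo[v] + e.weight;
                edgeTo[w] = v;
                if (!onQueue[w]) {
                    onQueue[w] = true;
                    queue.add(w);
                    // Enqueued more than V times => negative cycle reachable from src.
                    if (++number[w] > V) return false;
                }
            }
        }
    }
    return true; // no negative cycle reachable from src
}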

Global min-cut algorithm which outputs the exact edges

I am looking for an algorithm to find a global min-cut in an undirected graph.
I want to input a graph, and the algorithm should output the minimum set of edges such that cutting them partitions the graph into two parts.
Here are the requirements:
it finds the exact edges, not only their number.
the min-cut edges must be computed 100% correctly.
it works on an undirected graph.
the algorithm shall terminate, indicating whether or not it found the answer.
I searched some articles on the Internet and found out that Karger's minimum-cut algorithm is a randomized one, so its output may not be the exact min-cut. I don't want an algorithm like that.
I want to compute the exact edges (I need to know which edges they are) whose number is the smallest.
I would like to hear some advice while I am looking for such algorithms.
It would be great if your advice came with an introduction to the algorithm and example code.
Thanks in advance.
We can do this using a max-flow algorithm. By the max-flow min-cut theorem, the value of the maximum flow equals the capacity of the minimum cut, and once the maximum flow is computed, the min-cut edges can be read off the residual graph: they are exactly the edges that go from the set of vertices still reachable from the source in the residual graph to the rest of the vertices (all such edges are saturated). Calculating the max flow is a fairly standard problem, and there are many polynomial-time algorithms for it.
So, to summarize the algorithm: first find the max flow of the graph, then mark the vertices reachable from the source in the residual graph; the edges in the min cut are the ones leading from a marked vertex to an unmarked one.
One way to solve the max-flow problem is the Ford-Fulkerson method: it finds an augmenting path in the graph, pushes flow along it equal to the minimum residual capacity on that path (saturating at least one edge), and repeats this process until no augmenting paths are left.
To find the augmenting path we can use either a depth-first search or a breadth-first search. The Edmonds-Karp algorithm finds an augmenting path using a simple breadth-first search.
I have written C++ code below to find the max flow and the min cut; the max-flow part is adapted from the book "Competitive Programming" by Steven Halim.
Also note that since the graph is undirected, each undirected edge is stored as two directed edges, but each cut edge is printed only once, from the source side to the sink side.
#include<iostream>
#include<vector>
#include<queue>
#include<utility>
#define MAXX 100
#define INF 1e9
using namespace std;
int s, t, flow, n, dist[MAXX], par[MAXX], AdjMat[MAXX][MAXX]; //Adjacency Matrix graph
vector< pair<int, int> > G[MAXX]; //adjacency list graph
bool inS[MAXX]; // inS[v] = is v reachable from s in the residual graph?
void markReachable(int u) {
    inS[u] = true;
    for (int j = 0; j < (int)G[u].size(); j++) {
        int v = G[u][j].first;
        if (AdjMat[u][v] > 0 && !inS[v]) markReachable(v); // residual capacity left
    }
}
void minCut() {
    markReachable(s);
    for (int i = 0; i < n; i++) {
        if (!inS[i]) continue; // only edges leaving the source side
        for (int j = 0; j < (int)G[i].size(); j++) {
            int v = G[i][j].first;
            if (!inS[v]) cout << i << " -- " << v << endl; // cut edge (always saturated)
        }
    }
}
void augmentPath(int v, int minEdge) {
    if (v == s) { flow = minEdge; return; }
    else if (par[v] != -1) {
        augmentPath(par[v], min(minEdge, AdjMat[par[v]][v]));
        AdjMat[par[v]][v] -= flow; // forward edges
        AdjMat[v][par[v]] += flow; // backward edges
    }
}
void EdmondsKarp() {
    int max_flow = 0;
    while (1) {
        flow = 0;
        // BFS for an augmenting path; dist/par must be reset on every iteration
        for (int i = 0; i < n; i++) dist[i] = -1, par[i] = -1;
        queue<int> q;
        q.push(s); dist[s] = 0;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            if (u == t) break;
            for (int i = 0; i < (int)G[u].size(); i++) {
                int v = G[u][i].first;
                if (AdjMat[u][v] > 0 && dist[v] == -1) {
                    dist[v] = dist[u] + 1;
                    q.push(v);
                    par[v] = u;
                }
            }
        }
        augmentPath(t, INF);
        if (flow == 0) break; // max flow reached; the residual graph now encodes the min cut
        max_flow += flow;
    }
}
int main() {
    // Create the graph here, both as an adjacency list and an adjacency matrix;
    // also set the source "s" and sink "t" before calling max flow.
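    // Hypothetical example (my illustration, not from the original post):
    // a 4-vertex undirected graph with unit capacities, vertices 0..3.
    // Each undirected edge {a, b} is stored as two directed edges.
    n = 4; s = 0; t = 3;
    int edges[5][3] = {{0,1,1},{0,2,1},{1,2,1},{1,3,1},{2,3,1}};
    for (int k = 0; k < 5; k++) {
        int a = edges[k][0], b = edges[k][1], cap = edges[k][2];
        AdjMat[a][b] = AdjMat[b][a] = cap;
        G[a].push_back(make_pair(b, cap));
        G[b].push_back(make_pair(a, cap));
    }
    EdmondsKarp();
    minCut(); // prints the edges of one minimum cut (two edges here)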
    return 0;
}

How to calculate time complexity of recursions with memoization?

What is the time complexity of the code below? I know it makes multiple recursive calls, so it should probably be 3^n, but it also initializes an array of length n on each call, which is later used, and that confuses me. What would the time complexity be if we added an additional array to apply memoization? The code below is a solution for the HackerRank Java 1D Array (Hard) task.
public static boolean solve(int n, int m, int[] arr, boolean[] visited, int curr) {
    if (curr + m >= n || curr + 1 == n) {
        return true;
    }
    boolean[] newVisited = new boolean[n];
    for (int i = 0; i < n; i++) {
        newVisited[i] = visited[i];
    }
    boolean s = false;
    if (!visited[curr+1] && arr[curr+1] == 0) {
        newVisited[curr+1] = true;
        s = solve(n, m, arr, newVisited, curr+1);
    }
    if (s) {
        return true;
    }
    if (m > 1 && arr[curr+m] == 0 && !visited[curr+m]) {
        newVisited[curr+m] = true;
        s = solve(n, m, arr, newVisited, curr+m);
    }
    if (s) {
        return true;
    }
    if (curr > 0 && arr[curr-1] == 0 && !visited[curr-1]) {
        newVisited[curr-1] = true;
        s = solve(n, m, arr, newVisited, curr-1);
    }
    return s;
}
Your implementation does indeed seem to have exponential complexity, though it is a bit tedious to come up with a true worst case. One "at-least-pretty-bad" scenario would be to have the first n-m elements in arr set to 0 and the last m elements set to 1: a lot of branching right there, without really making use of any memoization. I would guess that your solution is at least exponential in n/m.
Here is another solution. We can rephrase the problem as a graph problem. Let the elements of your array be the vertices of a directed graph, and let there be an edge between a pair of vertices of one of the following forms: (x, x-1), (x, x+1) and (x, x+m), whenever both ends of such an edge have value 0. Add an additional vertex t to the graph, and add an edge to t from every vertex with value 0 among {n-m+1, n-m+2, ..., n}. So we have no more than 3n+m edges in our graph. Now your problem is equivalent to determining whether there is a path from vertex 0 to t in the graph we have just constructed. This can be decided by running a depth-first search starting from vertex 0, with complexity O(|V|+|E|), which in our case is O(n+m).
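This reachability formulation might look like the following sketch (my illustration, not the answerer's code; it skips building an explicit graph and runs an iterative DFS directly over the positions, using the same n, m and arr as above):
public static boolean canWin(int n, int m, int[] arr) {
    // DFS over positions 0..n-1; legal moves are -1, +1 and +m.
    boolean[] seen = new boolean[n];
    java.util.Deque<Integer> stack = new java.util.ArrayDeque<>();
    stack.push(0);
    seen[0] = true; // assumes the starting cell is playable (arr[0] == 0)
    while (!stack.isEmpty()) {
        int curr = stack.pop();
        if (curr + m >= n || curr + 1 == n) return true; // can jump out of the array
        for (int next : new int[]{curr - 1, curr + 1, curr + m}) {
            if (next >= 0 && next < n && !seen[next] && arr[next] == 0) {
                seen[next] = true;
                stack.push(next);
            }
        }
    }
    return false;
}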
Coming back to your solution: you are doing pretty much the same thing (perhaps without realizing it). The only real difference is that you are copying the visited array into newVisited, and thus never really sharing any of that memoization :p So just eliminate newVisited, use visited wherever you are currently using newVisited, and check what happens.
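Concretely, the suggested change might look like this sketch (keeping the original signature; the shared visited array is now mutated, never copied):
public static boolean solve(int n, int m, int[] arr, boolean[] visited, int curr) {
    // Callers should mark visited[0] = true before the initial call.
    if (curr + m >= n || curr + 1 == n) {
        return true;
    }
    if (!visited[curr+1] && arr[curr+1] == 0) {
        visited[curr+1] = true;
        if (solve(n, m, arr, visited, curr+1)) return true;
    }
    if (m > 1 && !visited[curr+m] && arr[curr+m] == 0) {
        visited[curr+m] = true;
        if (solve(n, m, arr, visited, curr+m)) return true;
    }
    if (curr > 0 && !visited[curr-1] && arr[curr-1] == 0) {
        visited[curr-1] = true;
        if (solve(n, m, arr, visited, curr-1)) return true;
    }
    return false;
}
Since every position is marked at most once and never unmarked, there are at most n recursive calls, and the exponential blow-up disappears.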

Complexity of non-recursive DFS code

I think the complexity of this code is:
Time: O(V), where V is the number of vertices.
Space: O(V).
public void dfs() {
    Stack<Integer> stack = new Stack<Integer>();
    stack.add(source);
    visited[source] = true; // mark the source so it is never re-added
    while (!stack.empty()) {
        int vertex = stack.pop();
        System.out.println(" print v: " + vertex);
        for (int v : graph.adj(vertex)) {
            if (!visited[v]) {
                visited[v] = true;
                stack.add(v);
                edgeTo[v] = vertex;
            }
        }
    }
}
Please correct me if I am wrong
Assuming that graph.adj() always produces a bounded number of vertices (maybe just one), then you are right.
However, if it depends in any way on the total number of vertices in the graph, then it is not. If this dependency is linear, the algorithm is O(n^2).
Generalizing, if f(n) is the average number of vertices returned by graph.adj() per vertex, then the answer is O(n*f(n)).
You are traversing the adjacency list of each vertex, and each vertex is visited exactly once. Thus you are in effect visiting each edge once, so the complexity is O(V+E), where E can be as large as O(V^2) in the worst case.

Algorithm complexity for minimum number of cliques in a graph

I have written an algorithm which finds the minimum number of cliques that cover a graph. I have tested my backtracking algorithm, but I couldn't work out its worst-case time complexity, despite trying many times.
I know that this problem is NP-hard, but I think it should still be possible to give a worst-case time complexity based on the code. What is the worst-case time complexity for this code? Any ideas? How would you formalize the recurrence?
I have tried to write understandable code. If you have any questions, write a comment.
I will be very glad for tips, references, and answers.
Thanks for the tips, guys :).
EDIT
As M C commented, I am basically trying to solve the Clique cover problem.
Pseudocode:
function countCliques(graph, vertice, cliques, numberOfClique, minimumSolution)
    for i = 1 .. number of cliques + 1 loop
        if i > minimumSolution then
            return
        end if
        if fitToClique(cliques(i), vertice, graph) then
            addVerticeToClique(cliques(i), vertice)
            if vertice == 0 then // last vertice
                minimumSolution = numberOfClique
                printResult(cliques)
            else
                if i == number of cliques + 1 then // using a new clique: the +1 is always a new clique
                    countCliques(graph, vertice - 1, cliques, number of cliques + 1, minimumSolution)
                else
                    countCliques(graph, vertice - 1, cliques, number of cliques, minimumSolution)
                end if
            end if
            deleteVerticeFromClique(cliques(i), vertice)
        end if
    end loop
end function

function fitToClique(clique, vertice, graph) returns bool
    for i = 1 .. cliqueSize loop
        verticeFromClique = clique(i)
        if not connected(verticeFromClique, vertice) then
            return false
        end if
    end loop
    return true
end function
Code
int c = 0; // global counter of solutions found

int countCliques(int** graph, int currentVertice, int** result, int numberOfSubset, int& minimum) {
    // if solution
    if (currentVertice == -1) {
        // if a better solution
        if (minimum > numberOfSubset) {
            minimum = numberOfSubset;
            printf("New minimum result:\n");
            print(result, numberOfSubset);
        }
        c++;
    } else {
        // if not a solution, try to insert into a clique; if it does not fit anywhere, open a new clique (the +1 in the loop)
        for (int i = 0; i < numberOfSubset + 1; i++) {
            if (i > minimum) {
                break;
            }
            // if it fits
            if (fitToSubset(result[i], currentVertice, graph)) {
                // insert
                result[i][0]++;
                result[i][result[i][0]] = currentVertice;
                // try to insert the next vertice
                countCliques(graph, currentVertice - 1, result, (i == numberOfSubset) ? (i + 1) : numberOfSubset, minimum);
                // delete vertice from the clique
                result[i][0]--;
            }
        }
    }
    return c;
}
bool fitToSubset(int *subSet, int currentVertice, int **graph) {
    int subsetLength = subSet[0];
    for (int i = 1; i < subsetLength + 1; i++) {
        if (graph[subSet[i]][currentVertice] != 1) {
            return false;
        }
    }
    return true;
}

void print(int **result, int n) {
    for (int i = 0; i < n; i++) {
        int m = result[i][0];
        printf("[");
        for (int j = 1; j < m; j++) {
            printf("%d, ", result[i][j] + 1);
        }
        printf("%d]\n", result[i][m] + 1);
    }
}
int** readFile(const char* file, int& v, int& e) {
    int from, to;
    int **graph;
    FILE *graphFile;
    fopen_s(&graphFile, file, "r");
    fscanf_s(graphFile, "%d %d", &v, &e);
    graph = (int**)malloc(v * sizeof(int*)); // note: sizeof(int*), not sizeof(int)
    for (int i = 0; i < v; i++) {
        graph[i] = (int*)calloc(v, sizeof(int));
    }
    while (fscanf_s(graphFile, "%d %d", &from, &to) == 2) {
        graph[from - 1][to - 1] = 1;
        graph[to - 1][from - 1] = 1;
    }
    fclose(graphFile);
    return graph;
}
The time complexity of your algorithm is very closely linked to listing the compositions of an integer, of which there are O(2^N).
The compositions alone are not enough though, as there is also a combinatorial aspect, although there are rules as well. Specifically, a clique must contain the highest-numbered unused vertex.
An example is the composition 2-2-1 (N = 5). The first clique must contain vertex 5 (the highest), reducing the number of unused vertices to 4. There is then a choice of 1 of 4 elements, leaving 3 unused vertices. One element of the second clique is then known (the highest remaining vertex), leaving 2 unused vertices, so a choice of 1 of 2 elements decides the final vertex of the second clique. This leaves only a single vertex for the last clique. For this composition there are thus 8 possible ways it could be made, given by (1*C(4,1)*1*C(2,1)*1). The 8 possible ways are as follows:
(5,4),(3,2),(1)
(5,4),(3,1),(2)
(5,3),(4,2),(1)
(5,3),(4,1),(2)
(5,2),(4,3),(1)
(5,2),(4,1),(3)
(5,1),(4,3),(2)
(5,1),(4,2),(3)
The above example shows the format required for the worst case, which occurs when the composition contains as many 2s as possible. I'm thinking this is still O(N!), even though it is actually (N-1)(N-3)(N-5)...(1) or (N-1)(N-3)(N-5)...(2). However, this worst case is impossible, as it would, as shown, require a complete graph, which would be caught right away and limit the answer to a single clique, of which there is only one solution.
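For reference, the product described above is the double factorial (N-1)!!; a standard identity (added here for clarity, not part of the original answer) is:
(N-1)!! = (N-1)(N-3)...(1) = N! / (2^(N/2) * (N/2)!)    for even N
(N-1)!! = (N-1)(N-3)...(2) = 2^((N-1)/2) * ((N-1)/2)!    for odd N
Up to polynomial factors this grows like sqrt(N!), so it is noticeably smaller than N! itself.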
Given the variations of the compositions, the number of possible compositions is probably a fair starting point for the upper bound, as O(2^N). That there are O(3^(N/3)) maximal cliques is another useful piece of information, as the algorithm could theoretically find all of them, although that isn't a tight bound either: some maximal cliques are found multiple times while others are not found at all.
A tighter upper bound is difficult for two main reasons. First, the algorithm progressively limits the maximum number of cliques (which you could call the size of the composition), which puts an upper limit on the computation time spent per clique. Second, missing edges cause a large number of possible variations to be ignored, which almost ensures that the vast majority of the O(N!) variations are skipped. Together, these make a tight upper bound difficult to state. If this isn't enough for an answer, you might want to take the question to the Mathematics area of Stack Exchange, as a better answer will require a fair bit of mathematical analysis.
