I am not able to understand the difference between a positive and a negative cycle in a weighted graph, the kind of cycles that Bellman-Ford and SPFA deal with.
What is it?
What are the properties?
If you can please include diagrams/examples of each.
The graph has weights on its edges.
The weight of a cycle is the sum of the weights of the edges that make the cycle.
A cycle is called positive if it has positive weight. A cycle is called negative if it has negative weight.
Think of the graph as a network of roads. Each edge represents one road. You are commuting through this network by car. A positive weight on an edge means you have to pay a toll to drive through that road. A negative weight on an edge means you are given a refund when you drive through that road.
Usually you expect driving through the network will cost you money, because of all the positive weights. When trying to drive from point A to point B, you will try to find a path of minimal cost.
If there is a negative cycle in the graph, then that means a car driving around this cycle will actually gain money instead of paying it.
If I ask you "please find a path with minimal cost from point A to point B" in a graph that has a negative cycle, you might tell me "Well, first go from point A to that cycle. Then, drive around the cycle forever and ever. When you're tired and wealthy, go to point B."
In other words, there are paths with arbitrarily low (arbitrarily negative) weight. So asking for a path of minimal weight in a graph that has a negative cycle is meaningless, because driving around the negative cycle again and again will always reduce your cost further.
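To make the Bellman-Ford connection from the question concrete, here is a minimal sketch in Python (the edge-list format and the example graph are made up for illustration): run |V| - 1 rounds of relaxation, then check whether any edge can still be relaxed; if one can, a negative cycle is reachable from the source.

```python
# Minimal sketch: Bellman-Ford negative-cycle detection.
# `edges` is a list of (u, v, w) tuples; vertex names are assumed hashable.
def has_negative_cycle(vertices, edges, source):
    INF = float("inf")
    dist = {v: INF for v in vertices}
    dist[source] = 0

    # Relax every edge |V| - 1 times; after that, shortest paths are final
    # unless a negative cycle is reachable from the source.
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # If any edge can still be relaxed, a reachable negative cycle exists.
    return any(dist[u] + w < dist[v] for u, v, w in edges)


# Example: the cycle B -> C -> B has weight 1 + (-3) = -2, so it is negative.
vertices = ["A", "B", "C"]
edges = [("A", "B", 4), ("B", "C", 1), ("C", "B", -3)]
print(has_negative_cycle(vertices, edges, "A"))  # True
```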
I have a school question that I'm not sure how to approach. Let's say you have an undirected, unweighted graph G that represents a city road network: the n nodes are intersections and the m edges are roads. Among the n nodes, h of them are hospitals. The question asks us to find, for each node, the distance to the nearest hospital. Would it be possible to do this using BFS, or would Dijkstra be a better choice?
In addition, we also need to propose a new algorithm that finds the K nearest hospitals to each node, with K being user input. In this case, is BFS still possible or is Dijkstra the only solution? Thank you.
The difference between Dijkstra and BFS is that with Dijkstra the queue is sorted so that closer nodes appear first.
In your case every edge has equal length and so this order comes automatically.
Thus, the two algorithms are equivalent in this case.
Breadth-first search can be viewed as a special case of Dijkstra's algorithm on unweighted graphs, where the priority queue degenerates into a FIFO queue.
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
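For the first part of the question, one standard approach (a sketch, assuming the graph is stored as an adjacency list) is to run a single BFS seeded with all hospital nodes at once; because every edge has length 1, the first time a node is reached its distance to the nearest hospital is final.

```python
from collections import deque

# Sketch: multi-source BFS on an unweighted graph.
# `adj` maps each node to a list of neighbours; `hospitals` is a set of nodes.
def nearest_hospital_distances(adj, hospitals):
    dist = {v: None for v in adj}            # None = not reached yet
    queue = deque()
    for h in hospitals:                       # all hospitals start at distance 0
        dist[h] = 0
        queue.append(h)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] is None:               # first visit = shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist


# Tiny example: a path 0 - 1 - 2 - 3 with a hospital at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(nearest_hospital_distances(adj, {0}))  # {0: 0, 1: 1, 2: 2, 3: 3}
```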
I have the problem below and I'm not seeing any solution more effective than the brute-force approach. Would anyone mind lending me a hand?
The problem consists of a directed, weighted, acyclic graph G = (V, E). Edges have weights w(u,v). The value of w(u,v) depends only on the vertex of origin (w(u,x) = w(u,y) if (u,x) and (u,y) exist). Originally, each vertex may have multiple incoming and/or outgoing edges. The goal is to keep at most one outgoing edge per vertex in such a way that the total remaining weight is maximal. Vertices that have an outgoing edge cannot have incoming ones. For example, consider figure 1. The left-side graph is the original one. Keeping at most one outgoing edge per vertex, the right-side graph represents a solution with maximum total weight, 17.
However, there is another constraint to this problem. Each vertex is assigned two values, capacity and load. A vertex's capacity says how much load can be attached to it. The capacity must also be taken into account while finding the maximum-total-weight configuration. Figure 2 shows the same graph as figure 1, but now the capacity constraint plays a decisive role. See that the maximum-total-weight configuration is different in this situation (right-side graph, figure 2).
In summary, there are 3 restrictions in order to get the maximum total weight:
Obey capacity limitation;
Vertices with outgoing edge don't have incoming ones;
Vertices have one outgoing edge at most.
The only solution I've come up with is testing all possible configurations, checking whether each one is valid and keeping track of the maximum. Does anyone have a better approach to tackle this problem?
Your problem looks like a knapsack problem to me: pick a set of edges to maximize profit.
What you can do for sure is use a constraint-satisfaction approach. For example, check this code that solves the knapsack problem. Adapting it to your needs should be a rather simple task.
You don't need any matching algorithm with this approach: a solving procedure will build the best possible solution directly. However, it might take a lot of time/memory for larger graphs (thousands of edges/nodes).
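As a rough illustration of the constraint-satisfaction idea (not the linked knapsack code), here is a sketch using Google OR-Tools CP-SAT. The data layout (edges as (u, v, w) tuples, load and capacity dicts) and the reading of the capacity rule (the loads of the chosen sources of a vertex's incoming edges must not exceed its capacity) are assumptions; adjust them to the actual problem.

```python
# Sketch of a CP-SAT model for the edge-selection problem (assumed data layout).
from ortools.sat.python import cp_model

def best_configuration(vertices, edges, load, capacity):
    model = cp_model.CpModel()
    # One Boolean per edge: 1 means the edge (u, v) is kept.
    x = {(u, v): model.NewBoolVar(f"x_{u}_{v}") for u, v, _ in edges}

    out_edges = {v: [] for v in vertices}
    in_edges = {v: [] for v in vertices}
    for u, v, _ in edges:
        out_edges[u].append((u, v))
        in_edges[v].append((u, v))

    for v in vertices:
        if out_edges[v]:
            # At most one outgoing edge per vertex.
            model.Add(sum(x[e] for e in out_edges[v]) <= 1)
        # A vertex with an outgoing edge cannot also have an incoming one.
        for e_out in out_edges[v]:
            for e_in in in_edges[v]:
                model.Add(x[e_out] + x[e_in] <= 1)
        if in_edges[v]:
            # Assumed capacity rule: the total load attached to v fits its capacity.
            model.Add(sum(load[u] * x[(u, v)] for (u, _) in in_edges[v]) <= capacity[v])

    # Maximize the total remaining weight.
    model.Maximize(sum(w * x[(u, v)] for u, v, w in edges))

    solver = cp_model.CpSolver()
    solver.Solve(model)
    return [(u, v) for u, v, _ in edges if solver.Value(x[(u, v)]) == 1]
```

For graphs with up to a few thousand edges this should be manageable; beyond that, the caveat above about time/memory applies.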
I want to implement the A* algorithm. I have read about the heuristic function and how it works, and I understand that an underestimate is needed to obtain an optimal path. But which heuristic function is best suited for a random directed graph? What I have tried so far is taking the smallest edge weight in the graph as the estimate from a node to the goal, since clearly the distance from the current node to the goal is not smaller than the smallest edge leaving the current node.
The Manhattan distance only works when you have a well-defined distance metric that you can apply to pairs of nodes, such as with points in a 2D plane. For a graph, there's no inherent way to get the distance between two nodes.
With the little information available to you from the problem definition, I don't think you will do much better than using the heuristic that assumes all unseen edges have weight equal to the smallest weight in the graph.
You could get a bit more advanced if you sorted all the edges by weight. Then, as you see edges with particular weights during A*, you can remove them from the sorted list. This will let you know a running value of what the smallest remaining edge weight could be.
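To illustrate the simple admissible heuristic described above (every non-goal node is assumed to be at least one minimum-weight edge away from the goal), here is a hedged A* sketch in Python; the adjacency-list format and the example graph are assumptions of the illustration.

```python
import heapq

# Sketch: A* on a weighted directed graph stored as {node: [(neighbour, weight), ...]}.
# Heuristic assumption: any non-goal node is at least one minimum-weight edge
# away from the goal, so h(n) = min_edge_weight is an underestimate (admissible).
def a_star(adj, start, goal):
    min_w = min(w for edges in adj.values() for _, w in edges)
    h = lambda n: 0 if n == goal else min_w

    best_g = {start: 0}
    frontier = [(h(start), start)]            # (f = g + h, node)
    while frontier:
        f, u = heapq.heappop(frontier)
        if u == goal:
            return best_g[u]
        if f > best_g[u] + h(u):              # stale queue entry, skip it
            continue
        for v, w in adj[u]:
            g = best_g[u] + w
            if g < best_g.get(v, float("inf")):
                best_g[v] = g
                heapq.heappush(frontier, (g + h(v), v))
    return None                               # goal unreachable


adj = {"A": [("B", 2), ("C", 5)], "B": [("C", 1)], "C": []}
print(a_star(adj, "A", "C"))  # 3
```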
I am struggling with the following problem:
Given a directed graph with 3 <= N <= 1000 vertices and 3 <= M <= 1000000 edges, you can choose a simple cycle of this graph and walk it. While you are walking the cycle, at each edge you are asked a question; if you answer correctly your money is doubled, otherwise it is halved.
Let's say you have D dollars and the chance that you answer the question at edge e_i correctly is p_i. Then the expected money after answering that question is:
2*D*p_i + (1/2)*D*(1 - p_i) = D*(1/2 + 3*p_i/2)
Find whether there is a simple cycle in the given graph such that, after walking it, the expected money is more than the money you started with.
My approach is to use Johnson's algorithm to find all simple cycles and then check whether there is any cycle for which the expected money is more than what you started with, but I keep getting time-outs. Am I missing something? Is there an observation I have to make, or should I just try to optimize my code more?
The trick in this problem is to reduce this to negative cycle detection.
If you start with x amount of money from a vertex and go around a cycle e_1, e_2, …, e_k and get back, you will end up with x*f_1*f_2*…*f_k, where f_i = (1/2 + 3*p(e_i)/2). What you want is f_1*f_2*…*f_k > 1. But this is the same as having ln(f_1) + ln(f_2) + … + ln(f_k) > 0.
So make a graph with edge weights -ln(f_i). The problem then reduces to negative cycle detection, which can be done using an algorithm like Bellman-Ford.
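A minimal sketch of that reduction (the edge format (u, v, p) is an assumption of the illustration; note that f_i = 1/2 + 3*p_i/2 > 0, so the logarithm is always defined):

```python
import math

# Sketch: transform each edge (u, v, p) into weight -ln(1/2 + 3p/2); then any
# negative cycle in the transformed graph is a cycle with expected gain > 1.
def has_profitable_cycle(vertices, prob_edges):
    edges = [(u, v, -math.log(0.5 + 1.5 * p)) for u, v, p in prob_edges]

    # Bellman-Ford style negative-cycle check over the whole graph: starting all
    # distances at 0 acts like a virtual source connected to every vertex.
    dist = {v: 0.0 for v in vertices}
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] for u, v, w in edges)


# Cycle A -> B -> A where both answers are right with probability 0.9:
# f = 0.5 + 1.5 * 0.9 = 1.85 per edge, so the cycle is profitable.
print(has_profitable_cycle(["A", "B"], [("A", "B", 0.9), ("B", "A", 0.9)]))  # True
```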
I am writing a thesis about shortest path algorithms, and I don't understand one thing. I have made a visualisation of Dijkstra's algorithm.
1) Is it correct? Or am I doing something wrong?
2) What would the Bellman-Ford algorithm look like? As far as I have looked for the difference, I found: "Bellman-Ford: the basic idea is very similar to Dijkstra's, but instead of selecting the shortest-distance neighbour edges, it selects all the neighbour edges." But Dijkstra also checks all vertices and all edges, doesn't it?
Dijkstra assumes that the cost of paths is monotonically increasing. That, plus the ordered search (using the priority queue), means that when you first reach a node, you have arrived via the shortest path.
This is not true with negative weights. If you use Dijkstra with negative weights then you may find that a later path is better than an earlier one (because a negative weight improved the path on a later step).
So in Bellman-Ford, when you arrive at a node you test whether the new path is shorter. In contrast, with Dijkstra, you can cull nodes as soon as they have been visited.
In some (most) cases Dijkstra will not explore all complete paths. For example, if G linked only back to C, then any path through G would have a higher cost than any through C. Bellman-Ford would still consider all paths through G to F (Dijkstra would never look at those because they are of higher cost than going through C). If it did not do this, it couldn't guarantee finding negative loops.
Here's an example: the above never calculates the path AGEF; E has already been marked as visited by the time you arrive from G.
I am also thinking the same
Dijkstra's algorithm solves the single-source shortest-path problem when all edges have non-negative weights. It is a greedy algorithm, similar to Prim's algorithm. The algorithm starts at the source vertex s and grows a tree T that ultimately spans all vertices reachable from s. Vertices are added to T in order of distance, i.e., first s, then the vertex closest to s, then the next closest, and so on.
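A minimal sketch of that idea in Python (using heapq as the priority queue; the adjacency-list format and the example graph are assumptions of the illustration):

```python
import heapq

# Sketch: Dijkstra's algorithm on a graph stored as {node: [(neighbour, weight), ...]}.
# All edge weights are assumed non-negative.
def dijkstra(adj, source):
    dist = {source: 0}
    done = set()                              # vertices whose distance is final
    pq = [(0, source)]                        # (distance, node) priority queue
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue                          # stale entry, already finalised
        done.add(u)
        for v, w in adj[u]:
            if v not in done and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist


adj = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
print(dijkstra(adj, "s"))  # {'s': 0, 'a': 1, 'b': 3}
```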