Is my approach correct? - algorithm

I am struggling with the following problem:
Given a directed graph with 3 <= N <= 1000 vertices and 3 <= M <= 1000000 edges, you can choose a simple cycle of the graph and walk it. While you are walking the cycle, at each edge you are asked a question; if you answer correctly your money is doubled, otherwise it's halved.
Say you have D dollars and the chance of answering the question at edge e_i correctly is p_i. Then the expected amount of money after answering that question is:
2*D*p_i + (1/2)*D*(1 - p_i) = D*(1/2 + 3*p_i/2)
Determine whether there is a simple cycle in the given graph such that the expected amount of money after walking it is more than the amount you started with.
My approach is to use Johnson's algorithm to find all simple cycles and then check whether there is any cycle for which the expected money is more than the starting amount, but I keep getting time-outs. Am I missing something? Is there an observation I have to make, or should I just try to optimize my code more?

The trick in this problem is to reduce this to negative cycle detection.
If you start with x amount of money from a vertex and go around a cycle e_1, e_2, …, e_k and get back, you will end up with x*f_1*f_2*…*f_k, where f_i = (1/2 + 3*p(e_i)/2). What you want is f_1*f_2*…*f_k > 1. But this is the same as having ln(f_1) + ln(f_2) + … + ln(f_k) > 0.
So make a graph with edge weights -ln(f_i). The problem then reduces to negative cycle detection, which can be done with an algorithm like Bellman-Ford.
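A minimal sketch of that reduction, assuming the input is given as a list of edges (u, v, p) with vertices numbered 0..n-1 and p the probability of answering the question on that edge correctly (this representation is an assumption for illustration, not part of the original problem statement):
import math

def has_profitable_cycle(n, edges):
    # edges: list of (u, v, p) with vertices 0..n-1 and success probability p.
    # Edge weight is -ln(1/2 + 3p/2); a profitable cycle becomes a negative cycle.
    w = [(u, v, -math.log(0.5 + 1.5 * p)) for u, v, p in edges]
    # Start every vertex at distance 0 (equivalent to adding a super-source),
    # then run Bellman-Ford relaxations.
    dist = [0.0] * n
    for _ in range(n - 1):
        for u, v, c in w:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
    # If anything can still be relaxed, a negative cycle (profitable cycle) exists.
    return any(dist[u] + c < dist[v] - 1e-12 for u, v, c in w)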

Related

Dijkstra's Algorithm negative edge

Can someone help me with this question?
I am still confused about whether Dijkstra's algorithm works with negative edges or not.
This question is from the Grokking Algorithms book, and in its errata it is said that this question has a possible answer.
How does it have a possible answer with a negative edge?
First of all, Dijkstra's algorithm won't work on all negatively weighted graphs, but in this particular case you can see that the negative weight does not make any difference to the shortest way to get from the starting point to any other point on the graph. That is why you get the same result with the negative edge as without it.
The reason Dijkstra's won't work on negatively weighted graphs is explained pretty well here (thanks to Amit) -> Why doesn't Dijkstra's algorithm work for negative weight edges?
It's been literally 10 years since I last touched anything to do with Dijkstra, but I think this one doesn't break because every time you pass over the negative portion, the loop still increases the sum (minus 1 vs several plus 2). This way, the path with the negative is discarded. But change that minus 1 to minus 5 and then you have a problem.
I'm not on my PC right now to apply this graph to my 10 year old homework, but I'm pretty sure that's what's going on.
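To make the general failure mode concrete, here is a small self-contained sketch (the graph and weights are made up for illustration; this is not the book's example):
import heapq

def dijkstra(graph, source):
    # Textbook Dijkstra: once a vertex is finalized it is never updated again.
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    done = set()
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        for v, w in graph[u]:
            if v not in done and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# A->B costs 1, A->C costs 2, C->B costs -2: the true distance A->B is 0 (via C),
# but B is finalized at distance 1 before the negative edge C->B is ever relaxed.
graph = {"A": [("B", 1), ("C", 2)], "B": [], "C": [("B", -2)]}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 2} -- B should be 0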

How to create an algorithm to find the shortest cycle containing an edge?

Hi so I'm working on a review for my algorithms class and I'm having trouble with one of the questions on the review. I really don't even know where to begin with it, so anything will help. Thanks!
Question
Describe an algorithm to find the shortest cycle containing a given edge e = (u,v) in a directed graph with positive edge lengths. If no cycle containing e exists, the algorithm should determine so. The algorithm should run in O(|V|^2) time.
Progressing Solution:
A simple idea to get you started would be to run DFS (depth-first search) on the graph while keeping a vector of the nodes currently on the recursion stack. If you visit a node that is already in this recursion vector, there is a cycle, and you can get its length from the length of the recursion vector. However, this need not be the smallest cycle, so don't terminate your algorithm there. Take the minimum of all the values you get by applying this to the entire graph.
I may have skipped over some important details, but they are for you to figure out.
For more help, refer to this link: http://www.geeksforgeeks.org/detect-cycle-in-a-graph/
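A minimal sketch of that DFS idea, specialised to the given edge e = (u, v): every simple path from v back to u, closed with e, is a cycle through e, so a DFS from v that records the path length whenever it reaches u finds the shortest such cycle. Note that this enumerates simple paths and is exponential in the worst case, so it does not meet the O(|V|^2) bound; the graph representation and names below are illustrative assumptions:
def shortest_cycle_through_edge(graph, u, v, w_uv):
    # graph: dict mapping node -> list of (neighbor, weight).
    # Returns the length of the shortest cycle containing edge (u, v),
    # or None if no such cycle exists.
    best = [None]

    def dfs(node, length, on_path):
        if node == u:
            # Closed a cycle u -> v -> ... -> u; include the given edge's weight.
            if best[0] is None or length + w_uv < best[0]:
                best[0] = length + w_uv
            return
        for nxt, w in graph.get(node, []):
            if nxt not in on_path:       # keep the path (and hence the cycle) simple
                on_path.add(nxt)
                dfs(nxt, length + w, on_path)
                on_path.remove(nxt)

    dfs(v, 0, {v})
    return best[0]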

Travelling by bus

If you have the full bus schedule for a country, how can you find the furthest anyone can travel in one day without visiting the same stop twice?
I assume a bus schedule gives you the full list of leaving and arriving times for every bus stop.
A slow and naive method would be as follows.
You can of course make a graph from the bus schedule with multiple directed edges between bus stops. You could then do a depth-first search, remembering the arrival time of the edge you took to reach each node and only taking edges from that stop that leave after that arrival time. If you reach a node you have visited before, you would only continue from there if the current time in your traversal is earlier than the earliest time at which you had ever visited that node before. You could record the furthest you can get from each node, and then check each node to find the furthest you can travel overall.
This seems very inefficient however and it really isn't a normal graph problem. The problem is that in a normal directed graph if you can get from A to B and from B to C then you can get from A to C. This isn't true here.
What is the fastest you can solve this problem?
I think your original algorithm is pretty good.
You can think of your approach as being a version of Dijkstra's algorithm, in attempting to find the shortest path to each node.
Note that it is best at this stage to weight the edges in the graph in terms of time. The idea is to use your Dijkstra-like algorithm to compute all nodes reachable within one day's worth of time, and then pick whichever of these nodes is furthest in space from the start point.
Implementations of Dijkstra can use a heap to retrieve the next node to explore in O(logn), and I think this would be a good enhancement to your approach as well. If you always choose the node that you can reach earliest, you never need to repeat the calculation for that node.
Overall the approach is:
For each starting point
Use a modified Dijkstra to compute all nodes reachable in 1 day
Find the furthest in space of all these nodes.
So for n starting points and e bus routes, the complexity is about O(n(n+e)log(n)) to get the optimal answer.
You should be able to get improved performance by using an appropriate heuristic in an A* search. The heuristic needs to underestimate the max distance possible from a point, so you could use the maximum speed of a bus multiplied by the remaining time.
Instead of making multiple edges for each departure from a location, you can make multiple nodes per location / time.
Create one node per location per departure time.
Create one node per location per arrival time.
Create edges to connect departures to arrivals.
Create edges to connect a given node to the node belonging to the same location at the nearest future time.
By doing this, any path you can traverse through the graph is "valid" (meaning a traveler would be able to achieve this by a combination of bus trips or choosing to sit at a location and wait for a future bus).
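A minimal sketch of that construction, assuming each scheduled trip is given as a tuple (from_stop, to_stop, depart_time, arrive_time) (the input format is an assumption for illustration):
from collections import defaultdict

def build_time_expanded_graph(trips):
    # trips: iterable of (frm, to, depart, arrive) tuples.
    # Nodes are (stop, time) pairs; edges are bus trips plus "wait" edges
    # connecting each node to the next event at the same stop.
    graph = defaultdict(list)      # (stop, time) -> list of (stop, time)
    events = defaultdict(set)      # stop -> times at which something happens there

    for frm, to, dep, arr in trips:
        graph[(frm, dep)].append((to, arr))                # riding the bus
        events[frm].add(dep)
        events[to].add(arr)

    for stop, times in events.items():
        ordered = sorted(times)
        for earlier, later in zip(ordered, ordered[1:]):
            graph[(stop, earlier)].append((stop, later))   # waiting at the stop
    return graph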
Sorry to say, but as described this problem has a pretty high complexity. I originally misread the problem and thought it was NP-hard; it is not, but its complexity is still high enough that I personally would not want to deal with it exactly. This algorithm is a pretty good approximation that gives considerable complexity savings, which I personally think is worth it.
However, if all you want is an answer that is "pretty good", there are a lot of fairly efficient algorithms out there that will get close very quickly.
Personally I would suggest using a simple greedy algorithm here.
I've tried this on a few (granted, small and contrived) examples and it has worked pretty well, with O(n log n) efficiency.
Associate a velocity with each node, velocity being the fastest you can move away from a given node. In my examples this velocity was distance_travelled/(wait_time + travel_time). I used the maximum velocity of all trips leaving a node as the velocity score for that node.
From your node/time calculate the velocities of all neighboring nodes and travel to the "fastest" node.
This algorithm is pretty good for the complexity as it basically transforms the problem into a static search, but there are a couple potential pitfalls that could be adjusted for depending on your data set.
The biggest issue with this algorithm is the possibility of a really fast bus going into the middle of nowhere. You could get around that by adding a "popularity" term to the velocity calculation (make more popular stops effectively faster) but depending on your data set that could easily make things either better or worse.
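Here is a minimal sketch of one possible reading of that greedy heuristic (the trip format, the dist / (wait + travel) score, and the use of total distance travelled are all illustrative assumptions):
def greedy_route(trips, start_stop, start_time, day_end):
    # trips: list of (frm, to, depart, arrive, dist).
    # Repeatedly take the catchable trip with the best dist / (wait + travel)
    # score, never revisiting a stop, until no trip fits in the day.
    stop, time, visited, total = start_stop, start_time, {start_stop}, 0.0
    while True:
        candidates = [t for t in trips
                      if t[0] == stop and t[2] >= time and t[3] <= day_end
                      and t[1] not in visited]
        if not candidates:
            return total, stop
        frm, to, dep, arr, dist = max(
            candidates,
            key=lambda t: t[4] / max((t[2] - time) + (t[3] - t[2]), 1e-9))
        stop, time, total = to, arr, total + dist
        visited.add(to)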
The simplistic graph representation (each city is a node and the edges represent travel time) will not work. That's because an "edge" is not always active -- it is only active at certain times of the day.
The second thing that comes to mind is Edward Tufte's Paris Train Schedule, which is a different kind of graph. But that does not quite fit the problem either. With the train schedule, the stations have a sequential relationship, but that's not the case in general with cities and bus schedules.
But Tufte motivates the following way to model it as a graph. You could write code only to construct the graph and use a standard graph library that includes the shortest path algorithm.
Each bus trip is an edge with weight = distance covered
Each (city, departure) and (city, arrival) is a node
All nodes for a given city are connected by zero-weight edges in a time-ordered sequence, ignoring whether it is an arrival or a departure. This subgraph will look like a chain.
(it is a directed graph)
Linear Time Solution: Note that the graph will be a directed, acyclic graph. Finding the longest path in such a graph is linear. "A longest path between two given vertices s and t in a weighted graph G is the same thing as a shortest path in a graph −G derived from G by changing every weight to its negation. Therefore, if shortest paths can be found in −G, then longest paths can also be found in G."
Hope this helps! If somebody can post a visualization of the graph, it would be nice. If I can do so myself, I will do 1 more edit.
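To make that claim concrete, here is a minimal sketch of longest path in a DAG; rather than negating weights, it uses the standard dynamic programming over a topological order, and it assumes the (time-expanded) graph really is acyclic and is given as weighted adjacency lists (an illustrative assumption):
from collections import defaultdict, deque

def longest_path_dag(graph):
    # graph: dict node -> list of (neighbor, weight), assumed acyclic.
    # Returns the weight of the longest path starting anywhere.
    indeg = defaultdict(int)
    for u in graph:
        for v, _ in graph[u]:
            indeg[v] += 1
    # Kahn's algorithm gives a topological order.
    order, queue = [], deque(u for u in graph if indeg[u] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in graph.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    best = defaultdict(float)            # longest path ending at each node
    for u in order:
        for v, w in graph.get(u, []):
            best[v] = max(best[v], best[u] + w)
    return max(best.values(), default=0.0)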
Naive is the best you'll get -- http://en.wikipedia.org/wiki/Longest_path_problem
EDIT:
So the problem is twofold.
Create a list of graphs where it's possible to travel from pointA to pointB. "Possible" is in terms of the times available for busA to travel from pointA to pointB.
Find the longest path among all the possible paths generated above.
Another approach would be to reevaluate the graph upon each node traversal and find the longest path.
It still reduces to finding the longest possible path, which is NP-hard.

How to find minimum number of transfers for a metro or railway network?

I am aware that Dijkstra's algorithm can find the minimum distance between two nodes (or in case of a metro - stations). My question though concerns finding the minimum number of transfers between two stations. Moreover, out of all the minimum transfer paths I want the one with the shortest time.
Now in order to find a minimum-transfer path I utilize a specialized BFS applied to metro lines, but it does not guarantee that the path found is the shortest among all other minimum-transfer paths.
I was thinking that perhaps modifying Dijkstra's algorithm might help - by heuristically adding weight (time) for each transfer, such that it would deter the algorithm from making transfer to a different line. But in this case I would need to find the transfer weights empirically.
Addition to the question:
I have been recommended to add a "penalty" to each time the algorithm wants to transfer to a different subway line. Here I explain some of my concerns about that.
I have put off this problem for a few days and got back to it today. After looking at the problem again, it looks like running Dijkstra's algorithm on stations and figuring out where the transfer occurs is hard; it's not as obvious as one might think.
Here's an example:
If here I have a partial graph (just 4 stations) and their metro lines: A (red), B (red, blue), C (red), D (blue). Let station A be the source.
And the connections are :
---- D(blue) - B (blue, red) - A (red) - C (red) -----
If I follow Dijkstra's algorithm: initially I place A into the queue, then dequeue A in the first iteration and look at its neighbors B and C, and I update their distances according to the weights A-B and A-C. Now even though B connects two lines, at this point I don't know whether I need to make a transfer at B, so I do not add the "penalty" for a transfer.
Let's say that the distance A-B < A-C, which causes B to be dequeued in the next iteration. Its neighbor is D, and only at this point do I see that the transfer had to be made at B. But B has already been processed (dequeued).
So I am not sure how this "delay" in determining the need for a transfer would affect the integrity of the algorithm.
Any thoughts?
You can make each of your weights a pair: (# of transfers, time). You can add these weights in the obvious way, and compare them in lexicographic order (compare # of transfers first, use time as the tiebreaker).
Of course, as others have mentioned, using K * (# of transfers) + time for some large enough K produces the same effect, as long as you know the maximum time a priori and you don't run out of bits in your weight storage.
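A minimal sketch of the pair-weight idea, assuming each edge is given as (neighbor, transfers, time) where transfers is 1 if taking that edge means changing lines and 0 otherwise (how that flag is derived from your line data is left to your graph construction):
import heapq

def dijkstra_pair_weights(graph, source):
    # graph: dict node -> list of (neighbor, transfers, time).
    # Distances are (total_transfers, total_time) pairs; Python tuples compare
    # lexicographically, so plain Dijkstra minimizes transfers first, time second.
    INF = (float("inf"), float("inf"))
    dist = {source: (0, 0)}
    pq = [((0, 0), source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, INF):
            continue                      # stale queue entry
        for v, transfers, time in graph.get(u, []):
            cand = (d[0] + transfers, d[1] + time)
            if cand < dist.get(v, INF):
                dist[v] = cand
                heapq.heappush(pq, (cand, v))
    return dist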
I'm going to describe my solution using the A* algorithm, which I consider to be an extension (and an improvement -- please don't shoot me) of Dijkstra's algorithm that is easier to understand intuitively. The basics of it go like this:
Add the starting path to the priority queue, weighted by distance-so-far + minimum distance to goal
Every iteration, take the lowest weighted path and explode it into every path that is one step from it (discarding paths that wrap around themselves) and put it back into the queue. Stop if you find a path that ends in the goal.
Instead of making your weight simply distance-so-far + minimum-distance-to-goal, you could use two weights: Stops and Distance/Time, compared this way:
Basically, to compare:
Compare stops first, and report this comparison if possible (i.e., if they aren't the same)
If stops are equal, compare distance traveled
And sort your queue this way.
If you've ever played Mario Party, think of stops as Stars and distance as Coins. In the middle of the game, a person with two stars and ten coins is going to be above someone with one star and fifty coins.
Doing this guarantees that the first path you take out of your priority queue that reaches the goal will be one with the fewest stops possible.
You have the right idea, but you don't really need to find the transfer weights empirically -- you just have to ensure that the weight for a single transfer is greater than the weight for the longest possible travel time. You should be pretty safe if you give a transfer a weight equivalent to, say, a year of travel time.
As Amadan noted in a comment, it's all about creating the right graph. I'll just describe it in more detail.
Consider two vertices (stations) to have an edge between them if they are on a single line. With this graph (and weights of 1) you will find the minimum number of transitions with Dijkstra.
Now, let's assume that the maximum total travel time is always less than 10000 (use your own constant). Then the weight of edge AB (where A and B are on one line) is time_to_travel_between(A, B) + 10000.
Running Dijkstra on such a graph guarantees that the minimal number of transitions is used first, and the minimum time is achieved second.
update on comment
Let's "prove" it. There're two solution: with 2 transfers and 40 minutes travel time and with 3 transfers and 25 minutes travel time. In first case you travel on 3 lines, so path weight will be 3*10000 + 40. In second: 4*10000 + 25. First solution will be chosen.
I had the same problem as you, until now. I was using Dijkstra. The penalty for transfers is a very good idea indeed, and I've been using it for a while now. The main problem is that you cannot use it directly in the weight, as you first have to identify the transfer. And I didn't want to modify the algorithm.
So what I've been doing is that each time you find a transfer, delete the node, add it back with the penalty weight, and rerun the graph.
But this way I found out that Dijkstra won't work. And this is where I tried Floyd-Warshall, which, unlike Dijkstra, compares all possible paths through the graph between each pair of vertices.
Switching to Floyd-Warshall helped me with my problem. Hope it helps you as well.
It's easier to code and a lot easier to implement.
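For reference, a minimal Floyd-Warshall sketch over a dense distance matrix (any transfer penalties, as discussed above, would be baked into the edge weights when you build the matrix):
def floyd_warshall(weights):
    # weights: n x n matrix, weights[i][j] = edge weight or float('inf'),
    # with weights[i][i] = 0. Returns the all-pairs shortest-distance matrix.
    n = len(weights)
    dist = [row[:] for row in weights]        # copy so the input is untouched
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist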

Algorithm to find two points furthest away from each other

I'm looking for an algorithm to be used in a racing game I'm making. The map/level/track is randomly generated, so I need to find two locations, start and goal, that make use of most of the map.
The algorithm is to work inside a two-dimensional space.
From each point, one can only traverse to the next point in four directions: up, down, left, right.
Points can only be either blocked or nonblocked; only nonblocked points can be traversed.
Regarding the calculation of distance, it should not be the "bird path", for lack of a better word. The path between A and B should be longer if there is a wall (or other blocking area) between them.
I'm unsure where to start. Comments are very welcome and proposed solutions are preferred in pseudocode.
Edit: Right. After looking through gs's code I gave it another shot. Instead of Python, this time I wrote it in C++. But still, even after reading up on Dijkstra's algorithm, the flood fill, and Hosam Aly's solution, I fail to spot any crucial difference. My code still works, but not as fast as you seem to be getting yours to run. Full source is on pastie. The only interesting lines (I guess) are the Dijkstra variant itself on lines 78-118.
But speed is not the main issue here. I would really appreciate the help if someone would be kind enough to point out the differences in the algorithms.
In Hosam Aly's algorithm, is the only difference that he scans from the borders instead of every node?
In Dijkstra's you keep track of and overwrite the distance walked, but not in flood fill -- but that's about it?
Assuming the map is rectangular, you can loop over all border points, and start a flood fill to find the most distant point from the starting point:
bestSolution = { start: (0,0), end: (0,0), distance: 0 };
for each point p on the border
flood-fill all points in the map to find the most distant point distantP and its distance newDistance
if newDistance > bestSolution.distance
bestSolution = { p, distantP, newDistance }
end if
end loop
I guess this would be in O(n^2). If I am not mistaken, it's (L+W) * 2 * (L*W) * 4, where L is the length and W is the width of the map, (L+W) * 2 represents the number of border points over the perimeter, (L*W) is the number of points, and 4 is the assumption that flood-fill would access a point a maximum of 4 times (from all directions). Since n is equivalent to the number of points, this is equivalent to (L + W) * 8 * n, which should be better than O(n^2). (If the map is square, the order would be O(16n^1.5).)
Update: as per the comments, since the map is more of a maze (than one with simple obstacles as I was thinking initially), you could apply the same logic as above, but checking all points in the map (as opposed to points on the border only). This should be in the order of O(4n^2), which is still better than both F-W and Dijkstra's.
Note: Flood filling is more suitable for this problem, since all vertices are directly connected through only 4 borders. A breadth first traversal of the map can yield results relatively quickly (in just O(n)). I am assuming that each point may be checked in the flood fill from each of its 4 neighbors, thus the coefficient in the formulas above.
Update 2: I am thankful for all the positive feedback I have received regarding this algorithm. Special thanks to #Georg for his review.
P.S. Any comments or corrections are welcome.
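A minimal runnable sketch of the flood fill used inside the loop above, assuming the map is a grid of booleans where True means a cell is walkable (the representation is an assumption):
from collections import deque

def farthest_point(grid, start):
    # BFS flood fill from start (a walkable (x, y) cell);
    # returns (farthest_cell, distance). grid[y][x] is True for walkable cells.
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    queue, best = deque([start]), (start, 0)
    while queue:
        x, y = queue.popleft()
        d = dist[(x, y)]
        if d > best[1]:
            best = ((x, y), d)
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] and (nx, ny) not in dist:
                dist[(nx, ny)] = d + 1
                queue.append((nx, ny))
    return best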
Follow up to the question about Floyd-Warshall or the simple algorithm of Hosam Aly:
I created a test program which can use both methods. Those are the files:
maze creator
find longest distance
In all test cases Floyd-Warshall was slower by a great magnitude; this is probably because the graph has very few edges, and Floyd-Warshall only pays off on dense graphs.
These were the times; each time the field size was quadrupled, and 3 out of 10 cells were obstacles.
Size Hosam Aly Floyd-Warshall
(10x10) 0m0.002s 0m0.007s
(20x20) 0m0.009s 0m0.307s
(40x40) 0m0.166s 0m22.052s
(80x80) 0m2.753s -
(160x160) 0m48.028s -
The time of Hosam Aly seems to be quadratic, therefore I'd recommend using that algorithm.
Also the memory consumption by Floyd-Warshall is n^2, clearly more than needed.
If you have any idea why Floyd-Warshall is so slow, please leave a comment or edit this post.
PS: I haven't written C or C++ in a long time, I hope I haven't made too many mistakes.
It sounds like what you want is the end points separated by the graph diameter. A fairly good and easy to compute approximation is to pick a random point, find the farthest point from that, and then find the farthest point from there. These last two points should be close to maximally separated.
For a rectangular maze, this means that two flood fills should get you a pretty good pair of starting and ending points.
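Here is a minimal sketch of that double sweep on a grid map (it repeats the same kind of BFS flood fill shown earlier so the snippet stands alone; the grid-of-booleans representation is an assumption):
import random
from collections import deque

def bfs_farthest(grid, start):
    # BFS from start; returns the farthest walkable cell and its distance.
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    queue, best = deque([start]), (start, 0)
    while queue:
        x, y = queue.popleft()
        d = dist[(x, y)]
        if d > best[1]:
            best = ((x, y), d)
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] and (nx, ny) not in dist:
                dist[(nx, ny)] = d + 1
                queue.append((nx, ny))
    return best

def approximate_diameter_endpoints(grid):
    # Pick a random walkable cell, flood fill to the farthest cell A, then flood
    # fill from A to the farthest cell B; (A, B) approximates a maximally
    # separated start/goal pair.
    walkable = [(x, y) for y, row in enumerate(grid)
                for x, cell in enumerate(row) if cell]
    a, _ = bfs_farthest(grid, random.choice(walkable))
    b, dist = bfs_farthest(grid, a)
    return a, b, dist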
I deleted my original post recommending the Floyd-Warshall algorithm. :(
gs did a realistic benchmark and guess what, F-W is substantially slower than Hosam Aly's "flood fill" algorithm for typical map sizes! So even though F-W is a cool algorithm and much faster than Dijkstra's for dense graphs, I can't recommend it anymore for the OP's problem, which involves very sparse graphs (each vertex has only 4 edges).
For the record:
An efficient implementation of Dijkstra's algorithm takes O(E log V) time for a graph with E edges and V vertices.
Hosam Aly's "flood fill" is a breadth first search, which is O(V). This can be thought of as a special case of Dijkstra's algorithm in which no vertex can have its distance estimate revised.
The Floyd-Warshall algorithm takes O(V^3) time, is very easy to code, and is still the fastest for dense graphs (those graphs where vertices are typically connected to many other vertices). But it's not the right choice for the OP's task, which involves very sparse graphs.
Raimund Seidel gives a simple method using matrix multiplication to compute the all-pairs distance matrix on an unweighted, undirected graph (which is exactly what you want) in the first section of his paper On the All-Pairs-Shortest-Path Problem in Unweighted Undirected Graphs [pdf].
The input is the adjacency matrix and the output is the all-pairs shortest-path distance matrix. The run-time is O(M(n)*log(n)) for n points where M(n) is the run-time of your matrix multiplication algorithm.
The paper also gives the method for computing the actual paths (in the same run-time) if you need this too.
Seidel's algorithm is cool because the run-time is independent of the number of edges, but we actually don't care here because our graph is sparse. However, this may still be a good choice (despite the slightly-worse-than n^2 run-time) if you want the all pairs distance matrix, and this might also be easier to implement and debug than floodfill on a maze.
Here is the pseudocode:
Let A be the nxn (0-1) adjacency matrix of an unweighted, undirected graph, G
All-Pairs-Distances(A)
Z = A * A
Let B be the nxn matrix s.t. b_ij = 1 iff i != j and (a_ij = 1 or z_ij > 0)
if b_ij = 1 for all i != j return 2B - A //base case
T = All-Pairs-Distances(B)
X = T * A
Let D be the nxn matrix s.t. d_ij = 2t_ij if x_ij >= t_ij * degree(j), otherwise d_ij = 2t_ij - 1
return D
To get the pair of points with the greatest distance we just return argmax_ij(d_ij)
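A runnable sketch of that pseudocode using numpy (assuming A is the dense 0/1 adjacency matrix of a connected, unweighted, undirected graph with at least 2 vertices):
import numpy as np

def all_pairs_distances(A):
    # Seidel's recursion, following the pseudocode above.
    n = A.shape[0]
    Z = A @ A
    B = ((A == 1) | (Z > 0)).astype(int)
    np.fill_diagonal(B, 0)
    off_diag = ~np.eye(n, dtype=bool)
    if B[off_diag].all():                 # base case: every distance is 1 or 2
        return 2 * B - A
    T = all_pairs_distances(B)
    X = T @ A
    deg = A.sum(axis=0)                   # degree(j) is the column sum of A
    # d_ij = 2*t_ij if x_ij >= t_ij * degree(j), otherwise 2*t_ij - 1
    return 2 * T - (X < T * deg).astype(int)

# The most distant pair is then the argmax over the distance matrix D:
# i, j = np.unravel_index(np.argmax(D), D.shape)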
Finished a Python mockup of the Dijkstra solution to the problem.
Code got a bit long so I posted it somewhere else: http://refactormycode.com/codes/717-dijkstra-to-find-two-points-furthest-away-from-each-other
In the size I set, it takes about 1.5 seconds to run the algorithm for one node. Running it for every node takes a few minutes.
Doesn't seem to work though; it always displays the top-left and bottom-right corners as the longest path, 58 tiles. Which of course is true when you don't have obstacles. But even after adding a couple of randomly placed ones, the program still finds that one to be the longest. Maybe it's still true; it's hard to test without more advanced shapes.
But maybe it can at least show my ambition.
Ok, "Hosam's algorithm" is a breadth first search with a preselection on the nodes.
Dijkstra's algorithm should NOT be applied here, because your edges don't have weights.
The difference is crucial, because if the weights of the edges vary, you need to keep a lot of options (alternate routes) open and check them with every step. This makes the algorithm more complex.
With the breadth-first search, you simply explore all edges once, in a way that guarantees that you find the shortest path to each node, i.e. by exploring the edges in the order you find them.
So basically the difference is Dijkstra's has to 'backtrack' and look at edges it has explored before to make sure it is following the shortest route, while the breadth first search always knows it is following the shortest route.
Also, in a maze the points on the outer border are not guaranteed to be part of the longest route.
For instance, if you have a maze in the shape of a giant spiral, but with the outer end going back to the middle, you could have two points, one at the heart of the spiral and the other at the end of the spiral, both in the middle!
So, a good way to do this is to use a breadth first search from every point, but remove the starting point after a search (you already know all the routes to and from it).
Complexity of breadth first is O(n), where n = |V|+|E|. We do this once for every node in V, so it becomes O(n^2).
Your description sounds to me like a maze routing problem. Check out the Lee Algorithm. Books about place-and-route problems in VLSI design may help you - Sherwani's "Algorithms for VLSI Physical Design Automation" is good, and you may find VLSI Physical Design Automation by Sait and Youssef useful (and cheaper in its Google version...)
If your objects (points) do not move frequently, you can perform such a calculation in much less than O(n^3) time.
All you need is to break the space into large grid cells and pre-calculate the inter-grid distances. Then selecting the point pairs that occupy the most distant grid cells is a matter of a simple table lookup. In the average case you will need to pairwise-check only a small set of objects.
This solution works if the distance metric is continuous. Thus if, for example, there are many barriers in the map (as in mazes), this method might fail.
