Count distinct rectangular grid mazes with acyclic paths for given size - algorithm

I was trying to solve the following problem:
An m×n maze is an m×n rectangular grid with walls placed between grid cells such that there is exactly one path from the top-left square to any other square.
The following are examples of a 9×12 maze and a 15×20 maze:
Let C(m,n) be the number of distinct m×n mazes. Mazes which can be formed by rotation and reflection from another maze are considered distinct.
It can be verified that C(1,1) = 1, C(2,2) = 4, C(3,4) = 2415, and C(9,12) = 2.5720e46 (in scientific notation rounded to 5 significant digits).
Find C(100,500)
Now, there is an explicit formula which gives the right result, and it is perfectly computable. However, as I understand it, solutions to Project Euler problems should be more like clever algorithms than explicit formula computations. Trying to formulate the solution as a recursion, I could only arrive at a linear system with a number of variables growing exponentially with the size of the maze. More precisely, if one tries to write a recursion for the number of m×n mazes with m held fixed, one arrives at a linear system in which one of the variables is the number of m×n mazes with the property given in the statement of problem 380, while the other variables are the numbers of m×n mazes with more than one connected component touching the boundary of the maze in some specific "configuration"; the number of such configurations seems to grow exponentially with m. So, while this approach is feasible for m = 2, 3, 4, etc., it does not seem to work for m = 100.
I also thought of reducing the problem to subproblems which can be solved more easily,
then reusing the subproblem solutions when constructing solutions to larger subproblems (the dynamic programming approach), but here I stumbled upon the fact that the subproblems seem to involve mazes of irregular shapes, and again, the number of such shapes is exponential in m and n.
If someone knows of a feasible approach (m=100, n=500) other than using explicit formulas or some ad hoc theorems, and can hint at where to look, it would be quite interesting to me.

This is basically a spanning tree counting problem. Specifically, it is counting the number of spanning trees in a grid graph.
Counting Spanning Trees in a Grid Graph
From the "Counting spanning trees" section of the Wikipedia entry:
The number t(G) of spanning trees of a connected graph is a
well-studied invariant. In some cases, it is easy to calculate t(G)
directly. For example, if G is itself a tree, then t(G)=1, while if G
is the cycle graph C_n with n vertices, then t(G)=n. For any graph G,
the number t(G) can be calculated using Kirchhoff's matrix-tree theorem...
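As a sanity check, here is a minimal exact-arithmetic sketch of that matrix-tree computation (the function name is mine; this brute-force cofactor determinant is fine for small grids, but far too slow for 100×500):

from fractions import Fraction

def grid_spanning_trees(m, n):
    """Count spanning trees of the m x n grid graph via the matrix-tree
    theorem: t(G) equals any cofactor of the graph Laplacian L = D - A."""
    V = m * n
    idx = lambda r, c: r * n + c
    L = [[Fraction(0)] * V for _ in range(V)]
    for r in range(m):
        for c in range(n):
            for r2, c2 in ((r, c + 1), (r + 1, c)):   # right and down neighbours
                if r2 < m and c2 < n:
                    a, b = idx(r, c), idx(r2, c2)
                    L[a][a] += 1; L[b][b] += 1
                    L[a][b] -= 1; L[b][a] -= 1
    # Cofactor: delete row 0 and column 0, then take the determinant
    M = [row[1:] for row in L[1:]]
    k, det = V - 1, Fraction(1)
    for i in range(k):
        p = next(r for r in range(i, k) if M[r][i] != 0)   # pivot search
        if p != i:
            M[i], M[p] = M[p], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, k):
            f = M[r][i] / M[i][i]
            for j in range(i, k):
                M[r][j] -= f * M[i][j]
    return int(det)

# Values quoted in the question:
assert grid_spanning_trees(2, 2) == 4
assert grid_spanning_trees(3, 4) == 2415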
Related Algorithms
Here are a few papers or posts related to counting the number of spanning trees in grid graphs:
"Counting Spanning Trees in Grid Graphs", Melissa Desjarlais and Robert Molina
Department of Mathematics and Computer Science, Alma College, August 17, 2012? (publish date uncertain)
"Counting the number of spanning trees in a graph - A spectral approach", from Univ. of Maryland class notes for CMSC858W: Algorithms for Biosequence Analysis,
April 29th, 2010
"Automatic Generation of Generating Functions for Counting the Number of Spanning Trees for Grid Graphs (and more general creatures) of Fixed (but arbitrary!) Width", by Shalosh B. Ekhad and Doron Zeilberger
The latter by Ekhad and Zeilberger provided the following, with answers that matched up with the problem-at-hand:
If you want to see explicit expressions (as rational functions in z)
for the formal power series whose coefficient of z^n in its Maclaurin
expansion (with respect to z) would give you the number of spanning
trees of the m by n grid graph (the Cartesian product of a path of m
vertices and a path of n vertices) for m=2 to m=6, then the input
gives the output.
Specifically, see the output.
Sidenote: without the provided solution values suggesting otherwise, a valid interpretation could be that the external structure of the maze matters. Two or more mazes with identical internal paths would then be distinct, since there are three options for entering or exiting a maze at a corner: the top-left corner could be open at the top, open on the left, or open on both sides, and similarly for a corner exit. If trying to represent these maze possibilities as a tree, two nodes may converge on entry rather than just diverging from start to finish, and there would be one or more additional nodes for the exit possibilities. This interpretation would increase the value of C(m,n).

The insight here comes from the question (emphasis mine)
A .. maze is a rectangular grid with walls placed between grid cells such that there is exactly one path from the top-left square to any other square.
If you think of the dual of the maze, i.e. the spaces one can occupy, it is clear that a maze must form a graph. Not just any graph either: for there to be a single path to every square, the graph must not contain any cycles, which makes it a tree. This reduction to a combinatorics problem suggests an algorithm. In the spirit of Project Euler, the rest is left as an exercise for the reader.

SPOILER AHEAD
I was wrong when I stated in one of the comments that "Now, there is a general theorem about spanning trees in a graph, but it does not seem to give a computationally feasible way to compute the number sought". The "general theorem", being the Matrix-Tree theorem attributed to Kirchhoff and referred to in one of the answers here, gives the result not only as the product of the nonzero eigenvalues of the graph Laplacian divided by the order of the graph, but also as the absolute value of any cofactor of the Laplacian, which in this case is the absolute value of the determinant of a 49999×49999 matrix. But although that matrix is very sparse, it still looked out of reach to me.
However, the reference
http://arxiv.org/pdf/0712.0681.pdf
("Determinants of block tridiagonal matrices", by Luca Guido Molinari),
permitted to reduce the problem to the evaluation of the determinant of an integer 100x100 dense matrix, having very large integers as its entries.
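To illustrate the kind of reduction involved, here is a sketch of the block-continuant recurrence for a block tridiagonal matrix whose off-diagonal blocks are all -I, which is the shape the grid Laplacian has when written column by column. This is only the core identity det(M) = det(D_N) with D_k = A_k D_{k-1} - D_{k-2}; handling the row and column deleted for the cofactor takes extra care, which is where Molinari's more general theorem comes in. The names are mine, and det is any exact integer determinant routine (for instance the Bareiss sketch below):

def matmul(X, Y):
    """Plain integer matrix product."""
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def block_tridiag_det(blocks, det):
    """Determinant of a block tridiagonal matrix with diagonal blocks
    `blocks` and off-diagonal blocks -I, via the block continuant
    D_k = A_k * D_{k-1} - D_{k-2} with D_0 = I and D_{-1} = 0."""
    m = len(blocks[0])
    D_prev = [[0] * m for _ in range(m)]                         # D_{-1} = 0
    D_cur = [[int(i == j) for j in range(m)] for i in range(m)]  # D_0 = I
    for A in blocks:
        prod = matmul(A, D_cur)
        D_next = [[prod[i][j] - D_prev[i][j] for j in range(m)] for i in range(m)]
        D_prev, D_cur = D_cur, D_next
    return det(D_cur)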
Further, the reference
http://www.ams.org/journals/mcom/1968-22-103/S0025-5718-1968-0226829-0/S0025-5718-1968-0226829-0.pdf
by Erwin H. Bareiss (one usually just speaks of the "Bareiss algorithm", but the recursion which I used, referred to as formula (8) in the reference, seems to be due to Charles Dodgson, a.k.a. Lewis Carroll :) ), then permitted me to evaluate this last determinant and thus obtain the answer to the original problem.
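A minimal sketch of that fraction-free elimination (the function name is mine). The one-step recurrence keeps every intermediate value an exact integer, since each division below is exact:

def bareiss_det(M):
    """Fraction-free (Bareiss) determinant of a square integer matrix."""
    A = [row[:] for row in M]
    n, sign, prev = len(A), 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:
            # swap in a row with a nonzero pivot, flipping the sign
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r], sign = A[r], A[k], -sign
                    break
            else:
                return 0  # the whole column is zero: singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # one-step Bareiss / Dodgson condensation recurrence
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return sign * A[-1][-1]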

I would say that finding an explicit formula is a correct way to solve a Project Euler problem. It will be fast, and it can be scaled. Just go for it :)

Related

Rearranging a graph so that certain nodes are not adjacent?

EDIT: Precisely, I am trying to find two disjoint independent sets of known size in a graph shaped like a triangular grid, which may have holes and has a variable perimeter shape.
I'm not very well versed in graph theory, so I'm not sure if there exists an efficient solution for this problem. Consider the following graphs:
The colors of any two nodes can be swapped. The goal is to ensure that no two red nodes are adjacent, and no two green nodes are adjacent. The edges marked with exclamation points are invalid. Basically, I need to write two algorithms:
Determine that the nodes in a given graph can be arranged so that red and green nodes are not adjacent to nodes of the same color.
Actually rearrange the nodes.
I'm a little lost on how to implement this. It's not too difficult to separate the nodes of one color, but repeating the process for the second color may mess up the first color. Without a way to determine whether the graph can actually be arranged properly, this process could loop forever.
Is there some kind of algorithm that I can use/write for this? I'm mainly interested in the first image's graph (a triangular grid), but a generic algorithm would work as well.
First, let's note that the problem is a variant of graph coloring.
Now, if you are only dealing with 2 colors (red, green), coloring a graph with 2 colors is fairly easy: it is basically done by finding out whether the graph is bipartite and coloring each "side" of the graph in one color. Finding out whether a graph is bipartite is fairly simple; see the sketch below.
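A minimal BFS 2-coloring sketch (the function name and adjacency-list representation are my assumptions); it returns a valid 2-coloring when the graph is bipartite and None otherwise:

from collections import deque

def two_color(adj):
    """2-color a graph given as adjacency lists over vertices 0..n-1.
    Returns a list of colors (0/1) or None if the graph is not bipartite."""
    n = len(adj)
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if color[u] is None:
                    color[u] = 1 - color[v]
                    queue.append(u)
                elif color[u] == color[v]:
                    return None  # odd cycle found: not bipartite
    return color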
However, if you want more than two colors, the problem becomes NP-Complete, and is actually a variant of the Graph Coloring Problem.
Graph Coloring Problem:
Given a graph G = (V,E) and a number k, determine whether there is a
function c: V -> {1,2,...,k} such that c(v) = c(u) implies (v,u) is
not an edge.
Informally, you can color the graph in k colors, and you need to determine if there is some coloring such that you never color 2 nodes that share an edge with the same color.
Note that while your problem seems slightly easier, since you already know the number of nodes of each color, this doesn't really make a difference.
Assume you have a polynomial time algorithm A that solves your problem.
Now, given an instance (G,k) of graph coloring, there are only O(n^3) possibilities for the triple (#color1, #color2, #color3), so by examining each of these and invoking A on it, you can find a polynomial time solution to Graph Coloring. This would mean P=NP, which (according to most CS researchers) is most likely not the case.
tl;dr:
For 2 colors: find out if the graph is bipartite - and give one color to each side of the graph.
For 3 or more colors: There is no known efficient solution, and the general belief is one does not exist.
I thought this problem would be easier for planar graphs, but unfortunately that's not the case. The best matches for this problem I was able to find are minimum sum coloring and largest bipartite subgraph.
For largest bipartite subgraph, assume that the number of reds plus the number of greens exactly matches the size of the largest bipartite subgraph. Then this problem is equivalent to yours. The paper claims that it is still NP-hard even for planar graphs.
For minimum sum coloring, assume that the red color has weight 1, the green color has weight 2, and we have infinitely many blue colors, each with weight greater than the graph size. Then if the answer is exactly the minimum sum coloring, there is no polynomial algorithm to find it (although the paper refers to such an algorithm for chordal graphs).
Anyway, it seems that the closer your red+green count is to the size of an 'optimal' (in some sense) subgraph, the more difficult the problem is.
If you can afford an inexact solution, or a relaxed solution where you only separate, say, the reds, you have an option. As I said in a comment, approximately solve the maximum independent set problem for a planar graph. Then color this set red and green, if it is large enough.
If you know that red+green is much less than the total number of vertices, another approximation can work. Look at the introduction chapter of this article. It claims that:
For graphs which are promised to have small chromatic number, a better guarantee is known: given a k-colorable graph, an algorithm due to Karger et al. [12] uses semidefinite programming (SDP) to find an independent set of size about Ω(n/∆^(1−2/k)).
Your graph is certainly 4-colorable (it is planar), so you can count on a large enough independent set. The same article states that a greedy solution can already find a large enough independent set.

Why do these maze generation algorithms produce mazes with different properties?

I was browsing the Wikipedia entry on maze generation algorithms and found that the article strongly implies that different maze generation algorithms (randomized depth-first search, randomized Kruskal's, etc.) produce mazes with different characteristics. This seems to suggest that the algorithms produce random mazes with different probability distributions over the set of all single-solution mazes (spanning trees on a rectangular grid).
My questions are:
Is this correct? That is, am I reading this article correctly, and is the article correct?
If so, why? I don't see an intuitive reason why the different algorithms would produce different distributions.
Uh well, I think it's pretty obvious that different algorithms generate different mazes. Let's just talk about spanning trees of a grid. Suppose you have a grid G and two algorithms to generate a spanning tree for the grid:
Algorithm A:
Pick any edge of the grid, with 99% probability choose a horizontal one, otherwise a vertical one
Add the edge to the maze, unless adding it would create a cycle
Stop when every vertex is connected to every other vertex (spanning tree complete)
Algorithm B:
As algorithm A, but set the probability to 1% instead of 99%
"Obviously" algorithm A produces mazes with lots of horizontal passages and algorithm B mazes with lots of vertical passages. That is, there is a statistical correlation between the number of horizontal passages in a maze and the maze being produced by algorithm A.
Of course the differences between the Wikipedia algorithms are more intricate, but the principle is the same: the algorithms sample the space of possible mazes for a given grid in a non-uniform, structured way, as the sketch below illustrates.
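A sketch of "algorithm A" above, written as a biased randomized-Kruskal over grid edges with union-find cycle detection (all names are mine):

import random

def biased_spanning_tree(width, height, p_horizontal=0.99):
    """Build a spanning tree of a width x height grid, preferring
    horizontal edges with probability p_horizontal. Extreme values of
    p_horizontal visibly bias the maze toward horizontal or vertical
    passages, illustrating the non-uniform sampling described above."""
    parent = list(range(width * height))
    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    horizontal = [((x, y), (x + 1, y)) for y in range(height) for x in range(width - 1)]
    vertical = [((x, y), (x, y + 1)) for y in range(height - 1) for x in range(width)]
    tree = []
    while len(tree) < width * height - 1:
        use_h = horizontal and (not vertical or random.random() < p_horizontal)
        (ax, ay), (bx, by) = edge = random.choice(horizontal if use_h else vertical)
        ra, rb = find(ay * width + ax), find(by * width + bx)
        if ra != rb:                   # keep the edge unless it closes a cycle
            parent[ra] = rb
            tree.append(edge)
    return tree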
LOL, I remember a scientific conference where a researcher presented her results about her algorithm that did something "for graphs". The results were statistical and presented for "random graphs". Someone in the audience asked: "which distribution of random graphs did you draw the graphs from?" The answer: "uh... they were produced by our graph generation program". Duh!
Interesting question. Here are my random 2c.
Comparing Prim's to, say, DFS, the latter seems to have a proclivity for producing deeper trees, simply because the first 'runs' have more space to create deep paths with fewer branches. Prim's algorithm, on the other hand, appears to create trees with more branching, because any open branch can be selected at each iteration.
One way to frame this would be to ask: what is the probability that each algorithm produces a tree of depth greater than N? I have a hunch that the probabilities would differ. A more formal approach to proving this might be to assign weights to each part of the tree and show that some parts are more likely to be taken, or to attempt to characterize the space some other way, but I'll be hand-wavy and guess it's correct :). I'm interested in what led you to think it wouldn't be, because my intuition was the opposite. And no, the Wiki article doesn't give a convincing argument.
EDIT
One simple way to see this is to consider an initial tree with two branches and a total of k nodes, e.g.,
*---* ... *
\--* ... *
Choose a random node as the start and end. DFS will produce one of two mazes: either the entire tree, or the part of it with the direct path from start to end. Prim's algorithm will produce the 'maze' with the direct path from start to end, with secondary paths of lengths 1 ... k.
It is not statistical until you require that each algorithm produce every solution it can.
What you are perceiving as statistical bias is only a bias towards the preferred, first solution.
That bias may not be algorithmic (set-theory-wise) but implementation dependent (like the bias in the choice of the pivot in quicksort).
Yes, it is correct. You can produce different mazes by starting the process in different ways. Some algorithms start with a fully closed grid and remove walls to generate a path through the maze, while some start with an empty grid and add walls, leaving behind a path. This alone can produce different results.

Find the extreme for priority function / alphabet order

We have an array of elements a_1, a_2, ..., a_N over an alphabet E, with N >> |E|.
For each symbol of the alphabet we define a unique integer priority V(sym). Let's define V{i} := V(symbol(a_i)) for simplicity.
How can I find the priority function V which maximizes
Count{ i : V{i} < V{i+1} }
In other words, I need to find the priorities / permutation of the alphabet for which the number of positions i satisfying the condition V{i} < V{i+1} is maximal.
Edit-1: Please, read carefully. I'm given an array ai and the task is to produce a function V. It's not about sorting the input array with a priority function.
Edit-2: Example
E = {a,b,c}; A = 'abcab$'; (here $ = artificial termination symbol, V{$}=+infinity)
One of the optimal priority functions is: V{a}=1,V{b}=2,V{c}=3, which gives us the following signs between array elements: a<b<c>a<b<$, resulting in 4 '<' signs of 5 total.
If elements could not have tied priorities, this would be trivial. Just sort by priority. But you can have equal priorities.
I would first sort the alphabet by priority. Then I'd extract the longest rising subsequence. That is the start of your answer. Extract the longest rising subsequence from what remains. Append that to your answer. Repeat the extraction process until the entire alphabet has been extracted.
I believe that this gives an optimal result, but I haven't tried to prove it. If it is not perfectly optimal, it still will be pretty good.
Now that I think I understand the problem, I can tell you that there is no good algorithm to solve it.
To see this, let us first construct a directed graph whose vertices are your elements, and whose weighted edges indicate how many times one element immediately preceded another. You can create a priority function by dropping enough edges to get a directed acyclic graph, using the edges to create a partially ordered set, and then adding order relations until you have a full linear order, from which you can trivially get a priority function. All of this is straightforward once you have figured out which edges to drop. Conversely, given that directed graph and your final priority function, it is easy to figure out which set of edges you had to decide to drop.
Therefore your problem is entirely equivalent to figuring out a minimal set of edges you need to drop from that directed graph to get a directed acyclic graph. However, as http://en.wikipedia.org/wiki/Feedback_arc_set says, this is a known NP-hard problem called minimum feedback arc set. (Update: it is therefore very unlikely that there is a good algorithm for the graphs you can come up with.)
If you need to solve it in practice, I'd suggest going for some sort of greedy algorithm. It won't always be right, but it will generally give somewhat reasonable results in reasonable time.
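For example, a simple greedy heuristic over the pair-weight graph described above might look like this (a sketch, not an exact algorithm; the names are mine):

from collections import Counter

def greedy_priority(arr):
    """Assign priorities greedily: repeatedly pick the remaining symbol
    whose outgoing adjacent-pair weight (it precedes others) most exceeds
    its incoming weight, and give it the next smallest priority."""
    w = Counter(zip(arr, arr[1:]))     # w[(x, y)] = times x immediately precedes y
    symbols = set(arr)
    order = []
    while symbols:
        best = max(symbols, key=lambda s:
                   sum(w[(s, t)] for t in symbols) - sum(w[(t, s)] for t in symbols))
        order.append(best)
        symbols.remove(best)
    return {sym: i + 1 for i, sym in enumerate(order)}

# The example from the question: 'abcab' admits V(a)=1, V(b)=2, V(c)=3
print(greedy_priority("abcab"))       # {'a': 1, 'b': 2, 'c': 3}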
Update: Moron is correct, I did not prove NP-hardness. However, there are good heuristic reasons to believe that the problem is, in fact, NP-hard. See the comments for more.
There's a trivial reduction from Minimum Feedback Arc Set on directed graphs whose arcs can be arranged into an Eulerian path. I quote from http://www14.informatik.tu-muenchen.de/personen/jacob/Publications/JMMN07.pdf :
To the best of our knowledge, the complexity status of minimum feedback arc set in such graphs is open. However, by a lemma of Newman, Chen, and Lovász [1, Theorem 4], a polynomial algorithm for [this problem] would lead to a 16/9 approximation algorithm for the general minimum feedback arc set problem, improving over the currently best known O(log n log log n) algorithm [2].
[1] Newman, A.: The maximum acyclic subgraph problem and degree-3 graphs. In: Proceedings of the 4th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX. LNCS (2001) 147–158.
[2] Even, G., Naor, J., Schieber, B., Sudan, M.: Approximating minimum feedback sets and multi-cuts in directed graphs. In: Proceedings of the 4th International Conference on Integer Programming and Combinatorial Optimization. LNCS (1995) 14–28.

How to assign consecutive numbers to nodes of directed graph?

There's a graph with a lot of nodes and very few edges between them; the problem is assigning numbers to nodes so that most edges go from node i to node i+1, or otherwise connect nodes with close numbers.
My problem is about printing graph data nicely, but an algorithm just like that is part of pretty much every compiler (intermediate code is just a graph, and the produced object code gets memory locations).
I thought it was just straightforward depth-first search, but the results of that aren't that great: it seems to minimize the number of back links well enough, but the ones it leaves tend to be horrible (like 1 -> 500 -> 1).
Any better ideas?
This paper discusses this problem, if you use Eyal Schneider's formulation of minimizing the sum of the edge deltas (absolute value of the difference between the endpoints' labels). It's under #2, Optimal Linear Arrangements.
Sadly, there's no algorithm given for achieving an optimal ordering (or labeling), and the general problem is NP-complete. There are references to some polynomial-time algorithms for trees, though.
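If an approximate answer is acceptable in practice, a plain local search over that edge-delta objective is easy to sketch (all names are mine; this is a heuristic, not an optimal algorithm):

import random

def edge_delta_cost(edges, label):
    """The optimal-linear-arrangement objective: sum over edges of
    |label[u] - label[v]|; smaller means adjacent nodes get close numbers."""
    return sum(abs(label[u] - label[v]) for u, v in edges)

def local_search_arrangement(nodes, edges, iters=20000):
    """Random pairwise-swap local search on the labeling.
    `nodes` is a list of node identifiers, `edges` a list of pairs."""
    label = {v: i for i, v in enumerate(nodes)}
    cost = edge_delta_cost(edges, label)
    for _ in range(iters):
        a, b = random.sample(nodes, 2)
        label[a], label[b] = label[b], label[a]
        new = edge_delta_cost(edges, label)
        if new <= cost:
            cost = new                                   # keep the swap
        else:
            label[a], label[b] = label[b], label[a]      # revert it
    return label, cost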
If you want to get into the academic stuff, google gives lots of hits for "Optimal Linear Arrangements".

Algorithm to find two points furthest away from each other

I'm looking for an algorithm to be used in a racing game I'm making. The map/level/track is randomly generated, so I need to find two locations, start and goal, that make use of most of the map.
The algorithm is to work inside a two dimensional space
From each point, one can only traverse to the next point in four directions; up, down, left, right
Points can only be either blocked or nonblocked; only nonblocked points can be traversed
Regarding the calculation of distance, it should not be the "bird path", for lack of a better word: the path between A and B should be longer if there is a wall (or other blocking area) between them
I'm unsure where to start; comments are very welcome and proposed solutions are preferred in pseudocode.
Edit: Right. After looking through gs's code I gave it another shot. Instead of Python, this time I wrote it in C++. But still, even after reading up on Dijkstra's algorithm, flood fill, and Hosam Aly's solution, I fail to spot any crucial difference. My code still works, just not as fast as you seem to be getting yours to run. Full source is on pastie. The only interesting lines (I guess) are the Dijkstra variant itself on lines 78-118.
But speed is not the main issue here. I would really appreciate the help if someone would be kind enough to point out the differences in the algorithms.
In Hosam Aly's algorithm, is the only difference that he scans from the borders instead of from every node?
In Dijkstra's you keep track of and overwrite the distance walked, but not in flood fill; but that's about it?
Assuming the map is rectangular, you can loop over all border points, and start a flood fill to find the most distant point from the starting point:
bestSolution = { start: (0,0), end: (0,0), distance: 0 }
for each point p on the border
    flood-fill the map from p to find the most distant point distantP
    and its distance newDistance
    if newDistance > bestSolution.distance
        bestSolution = { p, distantP, newDistance }
    end if
end loop
I guess this would be in O(n^2). If I am not mistaken, it's (L+W) * 2 * (L*W) * 4, where L is the length and W is the width of the map: (L+W) * 2 represents the number of border points over the perimeter, (L*W) is the number of points, and 4 is the assumption that flood-fill would access a point a maximum of 4 times (from all directions). Since n is equivalent to the number of points, this is equivalent to (L+W) * 8 * n, which should be better than O(n^2). (If the map is square, the order would be O(16n^1.5).)
Update: as per the comments, since the map is more of a maze (than one with simple obstacles as I was thinking initially), you could apply the same logic, but checking all points in the map (as opposed to points on the border only). This should be in the order of O(4n^2), which is still better than both F-W and Dijkstra's.
Note: flood filling is more suitable for this problem, since each vertex is directly connected to at most 4 neighbors. A breadth-first traversal of the map can yield results relatively quickly (in just O(n)). I am assuming that each point may be checked in the flood fill from each of its 4 neighbors, hence the coefficient in the formulas above.
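A minimal runnable version of the flood fill above, assuming the map is given as a list of strings with '.' marking nonblocked cells (the representation and names are my assumptions):

from collections import deque

def bfs_far(grid, start):
    """BFS flood fill from `start` over nonblocked cells; returns the
    most distant reachable cell and its path distance."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    far = (start, 0)
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == '.' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                if dist[(nr, nc)] > far[1]:
                    far = ((nr, nc), dist[(nr, nc)])
                queue.append((nr, nc))
    return far

def most_distant_pair(grid):
    """Run the flood fill from every open cell and keep the best pair."""
    best = None
    for r, row in enumerate(grid):
        for c, ch in enumerate(row):
            if ch == '.':
                end, d = bfs_far(grid, (r, c))
                if best is None or d > best[2]:
                    best = ((r, c), end, d)
    return best   # (start, end, distance)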
Update 2: I am thankful for all the positive feedback I have received regarding this algorithm. Special thanks to @Georg for his review.
P.S. Any comments or corrections are welcome.
Follow up to the question about Floyd-Warshall or the simple algorithm of Hosam Aly:
I created a test program which can use both methods. Those are the files:
maze creator
find longest distance
In all test cases Floyd-Warshall was slower by a great margin; this is probably due to the very limited number of edges, which this algorithm cannot take advantage of.
These were the times; in each step the field size was quadrupled, and 3 out of 10 cells were obstacles.
Size        Hosam Aly    Floyd-Warshall
(10x10)     0m0.002s     0m0.007s
(20x20)     0m0.009s     0m0.307s
(40x40)     0m0.166s     0m22.052s
(80x80)     0m2.753s     -
(160x160)   0m48.028s    -
The running time of Hosam Aly's algorithm seems to be quadratic, therefore I'd recommend using that algorithm. Also, the memory consumption of Floyd-Warshall is O(n^2), clearly more than needed.
If you have any idea why Floyd-Warshall is so slow, please leave a comment or edit this post.
PS: I haven't written C or C++ in a long time; I hope I haven't made too many mistakes.
It sounds like what you want are the two endpoints separated by the graph diameter. A fairly good and easy-to-compute approximation is to pick a random point, find the farthest point from it, and then find the farthest point from there. These last two points should be close to maximally separated.
For a rectangular maze, this means that two flood fills should get you a pretty good pair of starting and ending points.
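A sketch of that two-pass heuristic, reusing the bfs_far helper from the flood-fill sketch above. The double sweep is exact when the map's graph is a tree, which is precisely the case for a perfect maze:

def approx_diameter_endpoints(grid):
    """Pick any open cell, BFS to the farthest cell a, then BFS from a
    to the farthest cell b; (a, b) approximates the most separated pair."""
    start = next((r, c) for r, row in enumerate(grid)
                 for c, ch in enumerate(row) if ch == '.')
    a, _ = bfs_far(grid, start)
    b, d = bfs_far(grid, a)
    return a, b, d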
I deleted my original post recommending the Floyd-Warshall algorithm. :(
gs did a realistic benchmark and guess what, F-W is substantially slower than Hosam Aly's "flood fill" algorithm for typical map sizes! So even though F-W is a cool algorithm and much faster than Dijkstra's for dense graphs, I can't recommend it anymore for the OP's problem, which involves very sparse graphs (each vertex has only 4 edges).
For the record:
An efficient implementation of Dijkstra's algorithm takes O(E log V) time for a graph with E edges and V vertices.
Hosam Aly's "flood fill" is a breadth first search, which is O(V). This can be thought of as a special case of Dijkstra's algorithm in which no vertex can have its distance estimate revised.
The Floyd-Warshall algorithm takes O(V^3) time, is very easy to code, and is still the fastest for dense graphs (those graphs where vertices are typically connected to many other vertices). But it's not the right choice for the OP's task, which involves very sparse graphs.
Raimund Seidel gives a simple method using matrix multiplication to compute the all-pairs distance matrix on an unweighted, undirected graph (which is exactly what you want) in the first section of his paper "On the All-Pairs-Shortest-Path Problem in Unweighted Undirected Graphs" [pdf].
The input is the adjacency matrix and the output is the all-pairs shortest-path distance matrix. The run-time is O(M(n)*log(n)) for n points where M(n) is the run-time of your matrix multiplication algorithm.
The paper also gives the method for computing the actual paths (in the same run-time) if you need this too.
Seidel's algorithm is cool because the run-time is independent of the number of edges, but we actually don't care here because our graph is sparse. However, this may still be a good choice (despite the slightly-worse-than n^2 run-time) if you want the all pairs distance matrix, and this might also be easier to implement and debug than floodfill on a maze.
Here is the pseudocode:
Let A be the n x n (0-1) adjacency matrix of an unweighted, undirected graph G

All-Pairs-Distances(A):
    Z = A * A
    let B be the n x n matrix s.t. b_ij = 1 iff i != j and (a_ij = 1 or z_ij > 0)
    if b_ij = 1 for all i != j: return 2B - A   // base case: diameter <= 2
    T = All-Pairs-Distances(B)
    X = T * A
    let D be the n x n matrix s.t. d_ij = 2*t_ij if x_ij >= t_ij * degree(j),
                                   d_ij = 2*t_ij - 1 otherwise
    return D
To get the pair of points with the greatest distance we just return argmax_ij(d_ij)
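A runnable rendering of the pseudocode with numpy (my transcription; it assumes a connected graph given as a 0/1 integer adjacency matrix with zero diagonal):

import numpy as np

def all_pairs_distances(A):
    """Seidel's algorithm on a numpy 0/1 adjacency matrix."""
    n = len(A)
    Z = A @ A
    B = ((A == 1) | (Z > 0)) & ~np.eye(n, dtype=bool)
    if B.sum() == n * n - n:              # base case: diameter <= 2
        return 2 * B.astype(int) - A
    T = all_pairs_distances(B.astype(int))
    X = T @ A
    deg = A.sum(axis=1)                   # degree(j) for each column j
    return 2 * T - (X < T * deg).astype(int)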
I finished a Python mockup of the Dijkstra solution to the problem.
The code got a bit long, so I posted it somewhere else: http://refactormycode.com/codes/717-dijkstra-to-find-two-points-furthest-away-from-each-other
At the size I set, it takes about 1.5 seconds to run the algorithm for one node. Running it for every node takes a few minutes.
It doesn't seem to work though; it always displays the top-left and bottom-right corner as the longest path: 58 tiles. Which of course is true when you don't have obstacles. But even after adding a couple of randomly placed ones, the program still finds that one to be the longest. Maybe it's still true; it's hard to test without more advanced shapes.
But maybe it can at least show my ambition.
Ok, "Hosam's algorithm" is a breadth first search with a preselection on the nodes.
Dijkstra's algorithm should NOT be applied here, because your edges don't have weights.
The difference is crucial, because if the weights of the edges vary, you need to keep a lot of options (alternate routes) open and check them with every step. This makes the algorithm more complex.
With the breadth-first search, you simply explore all edges once, in a way that guarantees that you find the shortest path to each node, i.e. by exploring the edges in the order you find them.
So basically the difference is that Dijkstra's has to 'backtrack' and look at edges it has explored before to make sure it is following the shortest route, while the breadth-first search always knows it is following the shortest route.
Also, in a maze the points on the outer border are not guaranteed to be part of the longest route.
For instance, if you have a maze in the shape of a giant spiral, but with the outer end going back to the middle, you could have two points one at the heart of the spiral and the other in the end of the spiral, both in the middle!
So, a good way to do this is to use a breadth-first search from every point, but remove the starting point after a search (you already know all the routes to and from it).
The complexity of breadth-first search is O(|V| + |E|). We do this once for every node in V, so it becomes O(|V| * (|V| + |E|)).
Your description sounds to me like a maze routing problem. Check out the Lee Algorithm. Books about place-and-route problems in VLSI design may help you: Sherwani's "Algorithms for VLSI Physical Design Automation" is good, and you may find "VLSI Physical Design Automation" by Sait and Youssef useful (and cheaper in its Google version...).
If your objects (points) do not move frequently, you can perform such a calculation in much less than O(n^3) time.
All you need is to break the space into large grids and pre-calculate the inter-grid distance. Then selecting point pairs that occupy the most distant grids is a matter of simple table lookup. In the average case you will need to pairwise check only a small set of objects.
This solution works if the distance metric is continuous. Thus if, for example, there are many barriers in the map (as in mazes), this method might fail.
