Topological Sorting of a directed acyclic graph - algorithm

How would you output all the possible topological sorts for a directed acyclic graph? For example, given a graph where V points to W and X, W points to Y and Z, and X points to Z:
V --> W --> Y
W --> Z
V --> X --> Z
How do you topologically sort this graph to produce all possible results? I was able to use a breadth-first search to get V, W, X, Y, Z and a depth-first search to get V, W, Y, Z, X, but I wasn't able to output any other sorts.

An algorithm for generating all topological sorts for a given DAG (aka generating all linear extensions of a partial order) is given in the paper "Generating Linear Extensions Fast" by Pruesse and Ruskey. The algorithm has an amortized running time that is linear in the output (e.g.: if it outputs M topological sorts, it runs in time O(M)).
Note that in general you can't really have anything that has a runtime that's efficient with respect to the size of the input since the size of the output can be exponentially larger than the input. For example, a completely disconnected DAG of N nodes has N! possible topological sorts.

It might be possible to count the number of orderings faster, but the only way to actually generate all orderings that I can think of is with a full brute-force recursion. (I say "brute force", but this is still much better than the brutest-possible brute force approach of testing every possible permutation :) )
Basically, at every step there is a set S of vertices remaining (i.e. which have not been added to the order yet), and a subset X of these can be safely added in the next step. This subset X is exactly the set of vertices that have no in-edges from vertices in S.
For a given partial solution L consisting of some number of vertices that are already in the order, the set S of remaining vertices, and the set X of vertices in S that have no in-edges from other vertices in S, the call Generate(L, X, S) will generate all valid topological orders beginning with L.
Generate(L, X, S):
    If X is empty:
        Either L is already a complete solution, in which case it contains all n vertices and S is also empty, or the original graph contains a cycle.
        If S is empty:
            Output L as a solution.
        Otherwise:
            Report that a cycle exists. (In fact, all vertices in S participate in some cycle, though there may be more than one.)
    Otherwise:
        For each x in X:
            Let L' be L with x added to the end.
            Let X' be X\{x} plus any vertices whose only in-edge among vertices in S came from x.
            Let S' = S\{x}.
            Generate(L', X', S')
To kick things off, find the set X of all vertices having no in-edges and call Generate((), X, V). Because every x chosen in the "For each" loop is different, every partial solution L' generated by the iterations of this loop must also be distinct, so no solution is generated more than once by any call to Generate(), including the top-level call.
In practice, forming X' can be done more efficiently than the above pseudocode suggests: When we choose x, we can delete all out-edges from x, but also add them to a temporary list of edges, and by tracking the total number of in-edges for each vertex (e.g. in an array indexed by vertex number) we can efficiently detect which vertices now have 0 in-edges and should thus be added to X'. Then at the end of the loop iteration, all the edges that we deleted can be restored from the temporary list.
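For concreteness, here is a sketch of Generate in Python (the function and variable names are my own; the graph is given as a vertex list plus an edge list, and the edge-restoring trick described above is implemented by re-incrementing in-degrees on backtrack):

```python
from collections import defaultdict

def all_topological_sorts(vertices, edges):
    """Yield every topological order of the DAG (vertices, edges)."""
    out = defaultdict(list)           # out-edges per vertex
    indeg = {v: 0 for v in vertices}  # in-degree per vertex
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1

    L = []                                      # partial order built so far
    X = [v for v in vertices if indeg[v] == 0]  # safe to append next

    def generate():
        if not X:
            if len(L) == len(vertices):
                yield tuple(L)
            # else: the graph contains a cycle; nothing to yield
            return
        for x in list(X):            # iterate over a snapshot of X
            L.append(x)
            X.remove(x)
            newly_free = []
            for w in out[x]:         # "delete" x's out-edges
                indeg[w] -= 1
                if indeg[w] == 0:
                    newly_free.append(w)
                    X.append(w)
            yield from generate()
            for w in out[x]:         # restore the deleted edges
                indeg[w] += 1
            for w in newly_free:
                X.remove(w)
            X.append(x)
            L.pop()

    yield from generate()
```

For the graph in the question (V->W, V->X, W->Y, W->Z, X->Z) this yields five distinct orders.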

So this approach is flawed! Unsure if it can be salvaged, I'll leave it a little while, if anyone can spot how to fix it, either grab what you can and post a new answer or edit mine.
Specifically, I used the below algorithm on the example from the comment and it will not output the example given, so it is clearly flawed.
The way I've learned to do a topological sort is the following:
Create a list of all the elements with no arrows pointing into them
Create a dictionary of element -> number, where element here is any element in the original collection that has an arrow into it, and the number is how many elements point to it.
Create a dictionary of element -> list, where element here is any element in the original collection that has an arrow out of it, and the list is all the elements those arrows point to
In your example, the two dictionaries and the list would be like this:
D1      D2         List
W: 1    V: W, X    V
Y: 1    W: Y, Z
Z: 2    X: Z
X: 1
Then, start a loop where on each iteration you do the following:
Output all elements of the list; these currently have no arrows pointing into them. Make a temporary copy of the list, and clear the list, preparing it for the following iteration
Loop through the temporary copy, and find each element (if it exists) in the dictionary that is element -> list
For each element in those lists, decrement the corresponding number in the element -> number dictionary by 1 (removing 1 arrow). Once a number for an element here reaches 0, add that element to the list (it has no arrows left)
If the list is non-empty, redo the iteration loop
If you reach this point and the element -> number dictionary still has any elements left with a number above 0, then you have a cycle, since the loop above should not terminate until all arrows have been removed. (If you want, you can remove elements from that dictionary as their numbers reach zero during the iteration, to make this check easier.)
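The iteration described above is essentially Kahn's algorithm; a minimal Python sketch (the names are my own, and the graph is given as a successor dictionary with an entry for every vertex):

```python
def topo_sort(succs):
    """succs: dict mapping each vertex to the list of vertices it points to."""
    # D1: how many arrows point into each vertex
    indeg = {v: 0 for v in succs}
    for targets in succs.values():
        for t in targets:
            indeg[t] += 1
    # the list: vertices with no arrows pointing into them
    ready = [v for v in indeg if indeg[v] == 0]
    order = []
    while ready:
        current, ready = ready, []  # temporary copy; clear for next round
        for v in current:
            order.append(v)
            for t in succs[v]:
                indeg[t] -= 1       # remove one arrow
                if indeg[t] == 0:
                    ready.append(t)
    if len(order) != len(succs):
        raise ValueError("graph contains a cycle")
    return order
```

On the question's graph this returns V, then W and X, then Y and Z, matching the iteration outputs below.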
For your example, each iteration would output the following:
V
W, X (2nd iteration output both W and X)
Y, Z
If you want to know how I arrived at this solution, simply go through my iteration description step by step using the above dictionaries and list as the starting point.
Now, to specifically answer your question: how to output all combinations. The only place where "combinations" come into play is within each iteration. Basically, all the elements that you output in the first step of the iteration (the ones you made a temporary copy of) are considered "equivalent" and any internal ordering between these would have no impact on the topological sort.
So, do this:
In the first point in the iteration, place those elements into a list, and add that to another list, giving you a list of lists
This list of lists will now contain each iteration as one element, and one element will be yet another list with the elements output in that iteration
Now, combine all permutations of the first list with all the permutations of the second list with all the permutations of the third list, and so on
This means taking this output:
V
W, X
Y, Z
Which gives you 1 * 2 * 2 = 4 permutations in total and you would combine all permutations of the 1st iteration (which is 1) with all the permutations of the 2nd iteration (which is 2, W, X and X, W) with all the permutations of the 3rd iteration (which is 2, Y, Z and Z, Y).
The final list of permutations that are valid topological sorts would be this:
V, W, X, Y, Z
V, X, W, Y, Z
V, W, X, Z, Y
V, X, W, Z, Y
Here is the example from the comment:
A and B with no in-edges. Both A and B have an edge to C, but only A has an edge to D. Neither C nor D has any out-edges.
Which gives:
A --> C
A --> D
B --> C
Dictionaries and list:
D1      D2         List
C: 2    A: C, D    A
D: 1    B: C       B
Iterations would output:
A, B
D, C
All permutations (2 * 2 = 4):
A, B, D, C
A, B, C, D
B, A, D, C
B, A, C, D
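The flaw is easy to confirm by brute force: testing every permutation of the comment's example against its three edges shows there are five valid topological sorts, one more than the four produced above (the missing one is A, D, B, C). A quick check:

```python
from itertools import permutations

edges = [("A", "C"), ("A", "D"), ("B", "C")]  # the comment's example

def is_topological(order, edges):
    """True if every edge (u, v) has u before v in the given order."""
    pos = {v: i for i, v in enumerate(order)}
    return all(pos[u] < pos[v] for u, v in edges)

valid = [p for p in permutations("ABCD") if is_topological(p, edges)]
print(len(valid))                     # 5: one more than the 4 listed above
print(("A", "D", "B", "C") in valid)  # True: the sort the method misses
```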

Related

data structure for representing non square board game

I am trying to design a board game which is not square. Here we have 2 different types of pieces (attackers & defenders). Both can move to any adjacent free intersection. An attacker piece can also jump over a defender piece if there is an empty space at the adjacent intersection on the same line. Considering these cases, I can think of storing the board as an array.
But this is not the right choice, as I would need to hardcode the attackable positions for each index. I need your suggestions for designing this board.
One more option is using a graph and maintaining the direction of each node (Left, Right, Top, Down), but this would require 3 Down nodes on the top vertex of the board.
The two times I had to do this, I created a play_line data type. I had the canonical graph with nodes and edges; a play_line is a sequence of edges.
A legal move to a free, adjacent intersection is a trivial property from the graph alone. A piece A at node m can move to any node n where
edge (m,n) exists
node n is empty
A jump from m, over n, to p exists where
edge (m,n) exists
edge (n,p) exists
node n contains a piece D (defender)
node p is empty
Along the play_line containing edge (m,n), (n,p) is the next edge on that line.
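Assuming the graph's edge set and the play_line node sequences are available, the two rules above might be checked like this (a sketch; all names and the piece encoding are my own):

```python
def can_step(board, edges, m, n):
    """Plain move: an edge (m, n) exists and node n is empty."""
    return (m, n) in edges and board.get(n) is None

def can_jump(board, play_lines, m, n, p):
    """Jump: m, n, p consecutive on one play_line, a defender on n, p empty."""
    if board.get(n) != "D" or board.get(p) is not None:
        return False
    for line in play_lines:             # each line is a list of nodes in order
        for i in range(len(line) - 2):
            trio = line[i:i + 3]
            if trio == [m, n, p] or trio == [p, n, m]:
                return True
    return False

# attacker on b, defender on c, everything else empty
board = {"b": "A", "c": "D"}
play_lines = [["b", "c", "d", "e", "f"]]
edges = {("b", "c"), ("c", "b"), ("c", "d"), ("d", "c")}
```

Here can_step(board, edges, "b", "c") is False (c is occupied), while can_jump(board, play_lines, "b", "c", "d") is True.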
Does that help?
Update after OP comment
There is nothing to maintain for the play_line objects, as the lines of play do not change once initialized. These are hard-coded from the game board, an enhancement of the graph. For instance if the board is labeled
    a
b c d e f
g h i j k
l m n o p
  q r s
then the first full row is a line of play containing five nodes in order, [b, c, d, e, f]. There are corresponding graph edges (correct by construction) of (b, c), (c, d), (d, e), (e, f). Note that your code must either be able to traverse this in either direction, or you must make a second play_line in the reverse order.
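Building the row play_lines and their correct-by-construction edges from the labeled rows might look like this sketch (column and diagonal lines would be added the same way from the board's geometry; the single-node top row is left out since its connections depend on that geometry):

```python
rows = [["a"],
        ["b", "c", "d", "e", "f"],
        ["g", "h", "i", "j", "k"],
        ["l", "m", "n", "o", "p"],
        ["q", "r", "s"]]

# every multi-node row becomes a play_line; edges come from consecutive pairs
play_lines = [row for row in rows if len(row) > 1]
edges = set()
for line in play_lines:
    for u, v in zip(line, line[1:]):
        edges.add((u, v))
        edges.add((v, u))  # store both directions so traversal order doesn't matter
```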

Finding the biggest subset of elements that does not correlate

I have a set of integers and I want to find the largest subset in which the elements do not correlate with each other in a specific way. For example, a subset in which if any of the elements is multiplied by 13 the result is not in the subset.
My first thought is to iterate through all the possible subsets, filter out these that don't meet the condition and then find the largest one, but this is too slow and I don't know how to generate all possible subsets.
I'll be answering this one (from the comments). In general there's no good solution for an arbitrary "correlation".
The relationship here is the following: if you multiply any of the elements in the subset by some number, the resulting number must not be in the subset.
If your number is m
You can generate all chains x, x*m, x*m^2, ..., such that every number in the chain is in the set and x/m is not (so x is the head of its chain).
Remove every second element of each chain (i.e. x*m, x*m^3, ...) from the original set. The elements left are your target set.
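A sketch of this chain idea for m = 13, assuming a set of positive integers (0 would chain onto itself). Keeping the even positions x, x*m^2, x*m^4, ... of each chain is optimal, since a chain is a path and alternate elements form a maximum independent set on a path:

```python
def largest_subset(nums, m=13):
    """Keep alternate elements of every chain x, x*m, x*m^2, ...
    Assumes positive integers."""
    s = set(nums)
    keep = set()
    for x in sorted(s):
        if x % m == 0 and x // m in s:
            continue  # x is not a chain head: x/m is also in the set
        i = 0
        while x in s:
            if i % 2 == 0:  # keep chain positions 0, 2, 4, ...
                keep.add(x)
            x *= m
            i += 1
    return keep
```

For example, {1, 5, 13, 65, 169} splits into the chains 1, 13, 169 and 5, 65, giving the subset {1, 5, 169}.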
A better way is to build a graph, find the vertices with the most edges, and remove them until you get rid of all edges. Complexity is about O(N^2).
Here is a detailed algorithm:
for each possible pair (x, y) from the source set
begin
    if x = y * 13 or y = x * 13 then make an edge between x and y
end
while graph has edges
begin
    let V = a vertex with the maximum count of edges (it can be 1 or 2)
    remove V from the graph
end
result: the remaining vertices in the graph
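A runnable sketch of this vertex-removal algorithm (the names are my own; it builds edges for the multiply-by-13 pairs, then repeatedly removes a maximum-degree vertex until no edges remain):

```python
def largest_subset_graph(nums, m=13):
    nodes = set(nums)
    # make an edge between x and y whenever y = x * m and both are present
    edges = {(x, x * m) for x in nodes if x * m in nodes}
    while edges:
        degree = {}
        for a, b in edges:
            degree[a] = degree.get(a, 0) + 1
            degree[b] = degree.get(b, 0) + 1
        v = max(degree, key=degree.get)  # vertex with the maximum count of edges
        nodes.discard(v)
        edges = {(a, b) for a, b in edges if v not in (a, b)}
    return nodes  # the remaining vertices
```

On {1, 5, 13, 65, 169} this first removes 13 (degree 2), then one of 5 or 65, leaving a subset of size 3.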

Merging sorted lists without comparison key

Say we have the following lists:
list 1: x, y, z
list 2: w, x
list 3: u
And we want to merge them such that the order among each individual list is respected. A solution for the above problem might be w, x, y, z, u.
This problem is easy if we have a comparison key (e.g. string comparison; a < z), as this gives us a reference to any element's position relative to other elements in the combined list. But what about the case when we don't have a key? For the above problem, we could restate the problem as follows:
x < y AND y < z AND w < x where x, y, z, w, u are in {0, 1, 2, 3, 4}
The way I'm currently solving this type of problem is to model the problem as a constraint satisfaction problem -- I run the AC3 arc consistency algorithm to eliminate inconsistent values, and then run a recursive backtracking algorithm to make the assignments. This works fine, but it seems like overkill.
Is there a general algorithm or simpler approach to confront this type of problem?
Construct a graph with a node for every letter in your lists.
x y z
w u
Add a directed edge from letter X to letter Y for every pair of consecutive letters in any list.
x -> y -> z
^
|
w u
Topologically sort the graph nodes to obtain a final list that satisfies all your constraints.
If there were ever a cycle in your graph, the topological sorting algorithm would detect that cycle, revealing a contradiction in the constraints induced by your original lists.
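A sketch of this construction in Python, building an edge from each consecutive pair and then running a standard topological sort (the function name is my own):

```python
from collections import defaultdict

def merge_lists(lists):
    succs = defaultdict(set)
    indeg = defaultdict(int)
    nodes = set()
    for lst in lists:
        nodes.update(lst)
        for a, b in zip(lst, lst[1:]):  # edge for each consecutive pair
            if b not in succs[a]:
                succs[a].add(b)
                indeg[b] += 1
    ready = [n for n in nodes if indeg[n] == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for m in succs[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    if len(order) != len(nodes):
        raise ValueError("lists are contradictory (cycle)")
    return order
```

merge_lists([["x", "y", "z"], ["w", "x"], ["u"]]) returns one valid merge, e.g. w, x, y, z, u (which order comes out depends on how ties are broken).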

What is the meaning of "from distinct vertex chains" in this nearest neighbor algorithm?

The following pseudo-code is from the first chapter of an online preview version of The Algorithm Design Manual (page 7 from this PDF).
The example is of a flawed algorithm, but I still really want to understand it:
[...] A different idea might be to repeatedly connect the closest pair of
endpoints whose connection will not create a problem, such as
premature termination of the cycle. Each vertex begins as its own
single vertex chain. After merging everything together, we will end up
with a single chain containing all the points in it. Connecting the
final two endpoints gives us a cycle. At any step during the execution
of this closest-pair heuristic, we will have a set of single vertices
and vertex-disjoint chains available to merge. In pseudocode:
ClosestPair(P)
    Let n be the number of points in set P.
    For i = 1 to n − 1 do
        d = ∞
        For each pair of endpoints (s, t) from distinct vertex chains
            if dist(s, t) ≤ d then sm = s, tm = t, and d = dist(s, t)
        Connect (sm, tm) by an edge
    Connect the two endpoints by an edge
Please note that sm and tm here stand for s_m and t_m (m is a subscript in the book's typesetting).
First of all, I don't understand what "from distinct vertex chains" would mean. Second, i is used as a counter in the outer loop, but i itself is never actually used anywhere! Could someone smarter than me please explain what's really going on here?
This is how I see it, after explanation of Ernest Friedman-Hill (accepted answer):
So the example from the same book (Figure 1.4).
I've added names to the vertices to make it clear
So at first step all the vertices are single vertex chains, so we connect A-D, B-E and C-F pairs, b/c distance between them is the smallest.
At the second step we have 3 chains, and the distance between A-D and B-E is the same as between B-E and C-F, so we connect, say, A-D with B-E, and we are left with two chains: A-D-E-B and C-F.
At the third step the only way to connect them is through B and C, b/c B-C is shorter than B-F, A-F and A-C (remember we consider only endpoints of chains). So we have one chain now: A-D-E-B-C-F.
At the last step we connect two endpoints (A and F) to get a cycle.
1) The description states that every vertex always belongs either to a "single-vertex chain" (i.e., it's alone) or it belongs to one other chain; a vertex can only belong to one chain. The algorithm says at each step you select every possible pair of two vertices which are each an endpoint of the respective chain they belong to, and don't already belong to the same chain. Sometimes they'll be singletons; sometimes one or both will already belong to a non-trivial chain, so you'll join two chains.
2) You repeat the loop n times, so that you eventually select every vertex; but yes, the actual iteration count isn't used for anything. All that matters is that you run the loop enough times.
Though the question is already answered, here's a Python implementation of the closest pair heuristic. It starts with every point as a chain, then successively extends chains to build one long chain containing all points.
This algorithm does build a path, but it's not a sequence of robot arm movements, since the arm's starting point is unknown.
import matplotlib.pyplot as plot
import math
import random

def draw_arrow(axis, p1, p2, rad):
    """draw an arrow connecting point 1 to point 2"""
    axis.annotate("",
                  xy=p2,
                  xytext=p1,
                  arrowprops=dict(arrowstyle="-", linewidth=0.8, connectionstyle="arc3,rad=" + str(rad)),)

def closest_pair(points):
    distance = lambda c1p, c2p: math.hypot(c1p[0] - c2p[0], c1p[1] - c2p[1])
    chains = [[points[i]] for i in range(len(points))]
    edges = []
    for i in range(len(points) - 1):
        dmin = float("inf")  # infinitely big distance
        # test each chain against each other chain
        for chain1 in chains:
            for chain2 in [item for item in chains if item is not chain1]:
                # test each chain1 endpoint against each of chain2 endpoints
                for c1ind in [0, len(chain1) - 1]:
                    for c2ind in [0, len(chain2) - 1]:
                        dist = distance(chain1[c1ind], chain2[c2ind])
                        if dist < dmin:
                            dmin = dist
                            # remember endpoints as closest pair
                            chain2link1, chain2link2 = chain1, chain2
                            point1, point2 = chain1[c1ind], chain2[c2ind]
        # connect two closest points
        edges.append((point1, point2))
        chains.remove(chain2link1)
        chains.remove(chain2link2)
        if len(chain2link1) > 1:
            chain2link1.remove(point1)
        if len(chain2link2) > 1:
            chain2link2.remove(point2)
        linkedchain = chain2link1
        linkedchain.extend(chain2link2)
        chains.append(linkedchain)
    # connect first endpoint to the last one
    edges.append((chains[0][0], chains[0][len(chains[0]) - 1]))
    return edges

data = [(0.3, 0.2), (0.3, 0.4), (0.501, 0.4), (0.501, 0.2), (0.702, 0.4), (0.702, 0.2)]
# random.seed()
# data = [(random.uniform(0.01, 0.99), 0.2) for i in range(60)]
edges = closest_pair(data)

# draw path
figure = plot.figure()
axis = figure.add_subplot(111)
plot.scatter([i[0] for i in data], [i[1] for i in data])
nedges = len(edges)
for i in range(nedges - 1):
    draw_arrow(axis, edges[i][0], edges[i][1], 0)
# draw last - curved - edge
draw_arrow(axis, edges[nedges - 1][0], edges[nedges - 1][1], 0.3)
plot.show()
TLDR: Skip to the section "Clarified description of ClosestPair heuristic" below if already familiar with the question asked in this thread and the answers contributed thus far.
Remarks: I started the Algorithm Design Manual recently and the ClosestPair heuristic example bothered me because of what I felt like was a lack of clarity. It looks like others have felt similarly. Unfortunately, the answers provided on this thread didn't quite do it for me--I felt like they were all a bit too vague and hand-wavy for me. But the answers did help nudge me in the direction of what I feel is the correct interpretation of Skiena's.
Problem statement and background: From page 5 of the book for those who don't have it (3rd edition):
Skiena first details how the NearestNeighbor heuristic is incorrect, using the following image to help illustrate his case:
The figure on top illustrates a problem with the approach employed by the NearestNeighbor heuristic, with the bottom figure being the optimal solution. Clearly a different approach is needed to find this optimal solution. Cue the ClosestPair heuristic and the reason for this question.
Book description: The following description of the ClosestPair heuristic is outlined in the book:
Maybe what we need is a different approach for the instance that proved to be a bad instance for the nearest-neighbor heuristic. Always walking to the closest point is too restrictive, since that seems to trap us into making moves we didn't want.
A different idea might be to repeatedly connect the closest pair of endpoints whose connection will not create a problem, such as premature termination of the cycle. Each vertex begins as its own single vertex chain. After merging everything together, we will end up with a single chain containing all the points in it. Connecting the final two endpoints gives us a cycle. At any step during the execution of this closest-pair heuristic, we will have a set of single vertices and the ends of vertex-disjoint chains available to merge. The pseudocode that implements this description appears below.
Clarified description of ClosestPair heuristic
It may help to first "zoom back" a bit and answer the basic question of what we are trying to find in graph theory terms:
What is the shortest closed trail?
That is, we want to find a sequence of edges (e_1, e_2, ..., e_{n-1}) for which there is a sequence of vertices (v_1, v_2, ..., v_n) where v_1 = v_n and all edges are distinct. The edges are weighted, where the weight for each edge is simply the distance between vertices that comprise the edge--we want to minimize the overall weight of whatever closed trails exist.
Practically speaking, the ClosestPair heuristic gives us one of these distinct edges for every iteration of the outer for loop in the pseudocode (lines 3-10), where the inner for loop (lines 5-9) ensures the distinct edge being selected at each step, (s_m, t_m), is comprised of vertices coming from the endpoints of distinct vertex chains; that is, s_m comes from the endpoint of one vertex chain and t_m from the endpoint of another distinct vertex chain. The inner for loop simply ensures we consider all such pairs, minimizing the distance between potential vertices in the process.
Note (ties in distance between vertices): One potential source of confusion is that no sort of "processing order" is specified in either for loop. How do we determine the order in which to compare endpoints and, furthermore, the vertices of those endpoints? It doesn't matter. The nature of the inner for loop makes it clear that, in the case of ties, the most recently encountered vertex pairing with minimal distance is chosen.
Good instance of ClosestPair heuristic
Recall what happened in the bad instance of applying the NearestNeighbor heuristic (observe the newly added vertex labels):
The total distance covered was absurd because we kept jumping back and forth over 0.
Now consider what happens when we use the ClosestPair heuristic. We have n = 7 vertices; hence, the pseudocode indicates that the outer for loop will be executed 6 times. As the book notes, each vertex begins as its own single vertex chain (i.e., each point is a singleton where a singleton is a chain with one endpoint). In our case, given the figure above, how many times will the inner for loop execute? Well, how many ways are there to choose a 2-element subset of an n-element set (i.e., the 2-element subsets represent potential vertex pairings)? There are n choose 2 = n(n-1)/2 such subsets.
Since n = 7 in our case, there's a total of 21 possible vertex pairings to investigate. The nature of the figure above makes it clear that (C, D) and (D, E) are the only possible outcomes from the first iteration since the smallest possible distance between vertices in the beginning is 1 and dist(C, D) = dist(D, E) = 1. Which vertices are actually connected to give the first edge, (C, D) or (D, E), is unclear since there is no processing order. Let's assume we encounter vertices D and E last, thus resulting in (D, E) as our first edge.
Now there are 5 more iterations to go and 6 vertex chains to consider: A, B, C, (D, E), F, G.
Note (each iteration eliminates a vertex chain): Each iteration of the outer for loop in the ClosestPair heuristic results in the elimination of a vertex chain. The outer for loop iterations continue until we are left with a single vertex chain comprised of all vertices, where the last step is to connect the two endpoints of this single vertex chain by an edge. More precisely, for a graph G comprised of n vertices, we start with n vertex chains (i.e., each vertex begins as its own single vertex chain). Each iteration of the outer for loop results in connecting two vertices of G in such a way that these vertices come from distinct vertex chains; that is, connecting these vertices results in merging two distinct vertex chains into one, thus decrementing by 1 the total number of vertex chains left to consider. Repeating such a process n - 1 times for a graph that has n vertices results in being left with n - (n - 1) = 1 vertex chain, a single chain containing all the points of G in it. Connecting the final two endpoints gives us a cycle.
One possible depiction of how each iteration looks is as follows:
ClosestPair outer for loop iterations
1: connect D to E # -> dist: 1, chains left (6): A, B, C, (D, E), F, G
2: connect D to C # -> dist: 1, chains left (5): A, B, (C, D, E), F, G
3: connect E to F # -> dist: 3, chains left (4): A, B, (C, D, E, F), G
4: connect C to B # -> dist: 4, chains left (3): A, (B, C, D, E, F), G
5: connect F to G # -> dist: 8, chains left (2): A, (B, C, D, E, F, G)
6: connect B to A # -> dist: 16, single chain: (A, B, C, D, E, F, G)
Final step: connect A and G
Hence, the ClosestPair heuristic does the right thing in this example where previously the NearestNeighbor heuristic did the wrong thing:
Bad instance of ClosestPair heuristic
Consider what the ClosestPair algorithm does on the point set in the figure below (it may help to first try imagining the point set without any edges connecting the vertices):
How can we connect the vertices using ClosestPair? We have n = 6 vertices; thus, the outer for loop will execute 6 - 1 = 5 times, where our first order of business is to investigate the distance between vertices of the 6 choose 2 = 15 total possible pairs. The figure above helps us see that dist(A, D) = dist(B, E) = dist(C, F) = 1 - ɛ are the only possible options in the first iteration since 1 - ɛ is the shortest distance between any two vertices. We arbitrarily choose (A, D) as the first pairing.
Now there are 4 more iterations to go and 5 vertex chains to consider: (A, D), B, C, E, F. One possible depiction of how each iteration looks is as follows:
ClosestPair outer for loop iterations
1: connect A to D # --> dist: 1-ɛ, chains left (5): (A, D), B, C, E, F
2: connect B to E # --> dist: 1-ɛ, chains left (4): (A, D), (B, E), C, F
3: connect C to F # --> dist: 1-ɛ, chains left (3): (A, D), (B, E), (C, F)
4: connect D to E # --> dist: 1+ɛ, chains left (2): (A, D, E, B), (C, F)
5: connect B to C # --> dist: 1+ɛ, single chain: (A, D, E, B, C, F)
Final step: connect A and F
Note (correctly considering the endpoints to connect from distinct vertex chains): Iterations 1-3 depicted above are fairly uneventful in the sense that we have no other meaningful options to consider. Even once we have the distinct vertex chains (A, D), (B, E), and (C, F), the next choice is similarly uneventful and arbitrary. There are four possibilities given that the smallest possible distance between vertices on the fourth iteration is 1 + ɛ: (A, B), (D, E), (B, C), (E, F). The distance between vertices for all of the points above is 1 + ɛ. The choice of (D, E) is arbitrary. Any of the other three vertex pairings would have worked just as well. But notice what happens during iteration 5--our possible choices for vertex pairings have been tightly narrowed. Specifically, the vertex chains (A, D, E, B) and (C, F), which have endpoints (A, B) and (C, F), respectively, allow for only four possible vertex pairings: (A, C), (A, F), (B, C), (B, F). Even if it may seem obvious, it is worth explicitly noting that neither D nor E were viable vertex candidates above--neither vertex is included in the endpoint, (A, B), of the vertex chain of which they are vertices, namely (A, D, E, B). There is no arbitrary choice at this stage. There are no ties in the distance between vertices in the pairs above. The (B, C) pairing results in the smallest distance between vertices: 1 + ɛ. Once vertices B and C have been connected by an edge, all iterations have been completed and we are left with a single vertex chain: (A, D, E, B, C, F). Connecting A and F gives us a cycle and concludes the process.
The total distance traveled across (A, D, E, B, C, F) is as follows:
The distance above evaluates to 5 - ɛ + √(5ɛ^2 + 6ɛ + 5), as opposed to the total distance traveled by just going around the boundary (the right-hand figure in the image above, where all edges are colored in red): 6 + 2ɛ. As ɛ -> 0, we see that 5 + √5 ≈ 7.24 > 6, where 6 was the necessary amount of travel. Hence, we end up traveling about 20% farther than is necessary by using the ClosestPair heuristic in this case.

Minimum range of 3 sets

We have three sets S1, S2, S3. I need to find x,y,z such that
x ∈ S1
y ∈ S2
z ∈ S3
Let min denote the minimum value out of x, y, z.
Let max denote the maximum value out of x, y, z.
The range denoted by max - min should be the MINIMUM possible value.
Of course, the full-bruteforce solution described by IVlad is simple and therefore easier and faster to write, but its complexity is O(n^3).
According to your algorithm tag, I would like to post a more complex algorithm that has an O(n^2) worst case and O(n log n) average complexity (almost sure about this, but I'm too lazy to make a proof).
Algorithm description
Consider thinking about some abstract (X, Y, Z) tuple. We want to find a tuple that has a minimal distance between its maximum and minimum elements. What we can say at this point is that the distance is actually created by our maximum and minimum elements. Therefore, the value of the element between them really doesn't matter as long as it lies between the maximum and the minimum.
So, here is the approach. We allocate some additional set (let's call it S) and combine every initial set (X, Y, Z) into one. We also need the ability to look up the initial set of every element in the set we've just created (so, if we point to some element in S, say S[10], and ask "Where did this guy come from?", our application should answer something like "He comes from Y").
After that, let's sort our new set S by its keys (this would be O(n log n), or O(n) in certain cases)
Determining the minimal distance
Now the interesting part comes. What we want to do is to compute some artificial value, let's call it the minimal distance, and mark it as d[x], where x is some element from S. This value refers to the minimal max - min distance which can be achieved using the elements that are predecessors / successors of the current element in the sequence.
Consider the following example - this is our S set(first line shows indexes, second - values and letters X, Y and Z refer to initial sets):
 0  1  2  3  4  5  6  7
------------------------
 1  2  4  5  8 10 11 12
 Y  Z  Y  X  Y  Y  X  Z
Let's say we want to compute the minimal distance for the element with index 4. In fact, that minimal distance means the best (x, y, z) tuple that can be built using the selected element.
In our case (S[4]), we can say that our (x, y, z) pair would definitely look like (something, 8, something), because it should have the element we're counting the distance for (pretty obvious, hehe).
Now, we have to fill the gaps. We know that the elements we're seeking should come from X and Z, and we want those elements to be the best in terms of max - min distance. There is an easy way to select them.
We make a bidirectional run (first running left, then right, from the current element), seeking the first element not from Y. In this case we would find the nearest elements from X and Z in each of the two directions (4 elements total).
This finding method is what we need: if we select the first element from X while running (left or right, doesn't matter), that element will suit us better than any other element from X that follows it, in terms of distance. This happens because our S set is sorted.
In case of my example (counting the distance for element with index number 4), we would mark elements with indexes 6 and 7 as suitable from the right side and elements with indexes 1 and 3 from the left side.
Now, we have to test 4 cases that can happen - and take the case so that our distance is minimal. In our particular case we have the following (elements returned by the previous routine):
 Z   X   Y   X   Z
 2   5   8  11  12
We should test every (X, Y, Z) tuple that can be built using these elements, take the tuple with minimal distance, and save that distance for our element. In this example, we would say that the (11, 8, 12) tuple has the best distance of 4. So, we store d[4] = 4 (4 here is the element index).
Yielding the result
Now, when we know how to find the distance, let's do it for every element in our S set (this operation would take O(n^2) in the worst case, and better time, something like O(n log n), on average).
After we have that distance value for every element in our set, just select the element with minimal distance and run our distance counting algorithm (which is described above) for it once again, but now save the (-, -, -) tuple. It would be the answer.
Pseudocode
Here comes the pseudocode. I tried to make it easy to read, but its implementation would be more complex, because you'll need to code set lookups ("determine set for element"). Also note that the determine tuple and determine distance routines are basically the same, but the second yields the actual tuple.
COMBINE (X, Y, Z) -> S
SORT(S)
FOREACH (v in S)
    DETERMINE_DISTANCE(v, S) -> d[v]
DETERMINE_TUPLE(MIN(d[v]))
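A runnable sketch of this routine (O(n^2) worst case; the names are my own): combine the sets with a tag recording each element's origin, sort, and for every element scan left and right for the nearest elements from the two other sets, testing the four resulting tuples:

```python
def min_range(s1, s2, s3):
    # combine the three sets, tagging each value with its source set (0, 1, 2)
    S = sorted([(v, 0) for v in s1] + [(v, 1) for v in s2] + [(v, 2) for v in s3])
    best = None  # (range, values sorted ascending)
    for i, (v, src) in enumerate(S):
        nearest = {}
        # nearest element from each *other* set to the left of i
        for j in range(i - 1, -1, -1):
            w, s = S[j]
            if s != src and ("L", s) not in nearest:
                nearest[("L", s)] = w
        # nearest element from each *other* set to the right of i
        for j in range(i + 1, len(S)):
            w, s = S[j]
            if s != src and ("R", s) not in nearest:
                nearest[("R", s)] = w
        a_set, b_set = [t for t in (0, 1, 2) if t != src]
        # test the four left/right combinations
        for d1 in ("L", "R"):
            for d2 in ("L", "R"):
                if (d1, a_set) in nearest and (d2, b_set) in nearest:
                    a, b = nearest[(d1, a_set)], nearest[(d2, b_set)]
                    cand = (max(v, a, b) - min(v, a, b), tuple(sorted((v, a, b))))
                    if best is None or cand < best:
                        best = cand
    return best
```

With the example above, X = {5, 11}, Y = {1, 4, 8, 10}, Z = {2, 12}, the global optimum is the tuple (11, 10, 12) with range 2 (found while processing the element 10).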
P.S
I'm pretty sure that this method could be easily used for (-, -, -, ... -) tuple seeking, still resulting in good algorithmic complexity.
min = infinity (really large number in practice, like 1000000000)
solution = (-, -, -)
for each x ∈ S1
    for each y ∈ S2
        for each z ∈ S3
            t = max(x, y, z) - min(x, y, z)
            if t < min
                min = t
                solution = (x, y, z)
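The brute force above, directly in Python (the function name is my own):

```python
from itertools import product

def min_range_bruteforce(s1, s2, s3):
    best = None
    for x, y, z in product(s1, s2, s3):  # try every possible (x, y, z) triple
        t = max(x, y, z) - min(x, y, z)
        if best is None or t < best[0]:
            best = (t, (x, y, z))
    return best
```

For example, min_range_bruteforce({5, 11}, {1, 4, 8, 10}, {2, 12}) finds the triple 11, 10, 12 with range 2.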
