Even distribution algorithm

I am organizing a tournament where 12 players are going to meet each other on 10 board games.
I want each player to play at least once with each of the 11 other players over the 10 board games.
For example:
BoardGame1 - Match1 - Player1 + Player2 + Player3
BoardGame1 - Match2 - Player4 + Player5 + Player6
BoardGame1 - Match3 - Player7 + Player8 + Player9
BoardGame1 - Match4 - Player10 + Player11 + Player12
[...]
BoardGame10 - Match1 - Player1 + Player11 + Player9
BoardGame10 - Match2 - Player4 + Player2 + Player12
BoardGame10 - Match3 - Player7 + Player5 + Player3
BoardGame10 - Match4 - Player10 + Player8 + Player6
How do you create an algorithm that distributes the players evenly?
I'd like to take a TDD approach, so I need to be able to predict the expected result (meaning no random distribution).

If all players play each other exactly once, then the resulting object
would be a Kirkman Triple System. There is no KTS with the parameters
you want, but since each player has twenty opponent slots and only
eleven potential opponents, it should be easy to find a suitable
schedule.
The code below generates the (11 choose 2) × (8 choose 2) × (5 choose 2)
= 15400 possible games and, for each round, greedily chooses the one
that spreads the pairings most fairly.
import collections
import itertools
import pprint

def partitions(players, k):
    # Yield every way to split `players` into unordered groups of size k.
    players = set(players)
    assert len(players) % k == 0
    if len(players) == 0:
        yield []
    else:
        # Fix the smallest remaining player as the "leader" of its group,
        # so each partition is generated exactly once.
        leader = min(players)
        players.remove(leader)
        for comb in itertools.combinations(sorted(players), k - 1):
            group = {leader} | set(comb)
            for part in partitions(players - group, k):
                yield [group] + part

def update_pair_counts(pair_counts, game):
    # Record every pair of opponents that meets in this game.
    for match in game:
        pair_counts.update(itertools.combinations(sorted(match), 2))

def evaluate_game(pair_counts, game):
    # Summarize the pair-count distribution this candidate game would
    # produce; candidates are compared lexicographically on this list.
    pair_counts = pair_counts.copy()
    update_pair_counts(pair_counts, game)
    objective = [0] * max(pair_counts.values())
    for count in pair_counts.values():
        objective[count - 1] += 1
    total = 0
    for i in range(len(objective) - 1, -1, -1):
        total += objective[i]
        objective[i] = total
    return objective

def schedule(n_groups, n_players_per_group, n_games):
    games = list(partitions(range(n_groups * n_players_per_group), n_players_per_group))
    pair_counts = collections.Counter()
    for i in range(n_games):
        # Greedily pick the best-scoring game given the pairings so far.
        game = max(games, key=lambda game: evaluate_game(pair_counts, game))
        yield game
        update_pair_counts(pair_counts, game)

def main():
    pair_counts = collections.Counter()
    for game in schedule(4, 3, 10):
        pprint.pprint(game)
        update_pair_counts(pair_counts, game)
    print()
    pprint.pprint(pair_counts)

if __name__ == "__main__":
    main()
Sample output:
[{0, 1, 2}, {3, 4, 5}, {8, 6, 7}, {9, 10, 11}]
[{0, 3, 6}, {1, 4, 9}, {2, 10, 7}, {8, 11, 5}]
[{0, 4, 7}, {3, 1, 11}, {8, 9, 2}, {10, 5, 6}]
[{0, 1, 5}, {2, 11, 6}, {9, 3, 7}, {8, 10, 4}]
[{0, 8, 3}, {1, 10, 6}, {2, 11, 4}, {9, 5, 7}]
[{0, 10, 11}, {8, 1, 7}, {2, 3, 5}, {9, 4, 6}]
[{0, 9, 2}, {1, 10, 3}, {8, 4, 5}, {11, 6, 7}]
[{0, 4, 6}, {1, 2, 5}, {10, 3, 7}, {8, 9, 11}]
[{0, 5, 7}, {1, 11, 4}, {8, 2, 10}, {9, 3, 6}]
[{0, 3, 11}, {8, 1, 6}, {2, 4, 7}, {9, 10, 5}]
Counter({(0, 3): 3,
(0, 1): 2,
(0, 2): 2,
(1, 2): 2,
(3, 5): 2,
(4, 5): 2,
(6, 7): 2,
(6, 8): 2,
(7, 8): 2,
(9, 10): 2,
(9, 11): 2,
(10, 11): 2,
(0, 6): 2,
(3, 6): 2,
(1, 4): 2,
(4, 9): 2,
(2, 7): 2,
(2, 10): 2,
(7, 10): 2,
(5, 8): 2,
(8, 11): 2,
(0, 4): 2,
(0, 7): 2,
(4, 7): 2,
(1, 3): 2,
(1, 11): 2,
(3, 11): 2,
(2, 8): 2,
(2, 9): 2,
(8, 9): 2,
(5, 10): 2,
(6, 10): 2,
(0, 5): 2,
(1, 5): 2,
(2, 11): 2,
(6, 11): 2,
(3, 7): 2,
(3, 9): 2,
(7, 9): 2,
(4, 8): 2,
(8, 10): 2,
(1, 6): 2,
(1, 10): 2,
(2, 4): 2,
(4, 11): 2,
(5, 7): 2,
(5, 9): 2,
(0, 11): 2,
(1, 8): 2,
(2, 5): 2,
(4, 6): 2,
(6, 9): 2,
(3, 10): 2,
(3, 4): 1,
(1, 9): 1,
(5, 11): 1,
(5, 6): 1,
(2, 6): 1,
(4, 10): 1,
(0, 8): 1,
(3, 8): 1,
(0, 10): 1,
(1, 7): 1,
(2, 3): 1,
(0, 9): 1,
(7, 11): 1})

How do you create an algorithm that distributes the players evenly? I'd like to take a TDD approach, so I need to be able to predict the expected result (meaning no random distribution).
TDD tends to be successful when your problem is that you know what the computer should do, and you know how to make the computer do that, but you don't know the best way to write the code.
When you don't know how to make the computer do what you want, TDD is a lot harder. There are two typical approaches taken here.
The first is to perform a "spike" - sit down and hack things until you understand how to make the computer do what you want. The key feature of spikes is that you don't get to keep the code changes at the end; instead, you discard your spiked code, keep what you have learned in your head, and start over by writing the tests that you need.
The second approach is to sort of sneak up on it - you do TDD for the very simple cases that you do understand, and keep adding tests that are just a little bit harder than what you have already done. See Robert Martin's Craftsman series for an example of this approach.
For this problem, you might begin by first thinking of an interface that you might use for accessing the algorithm. For instance, you might consider a design that accepts as input a number of players and a number of games, and returns you a sequence of tuples, where each tuple represents a single match.
Typically, this version of the interface will look like general purpose data structures as inputs (in this example: numbers), and general purpose data structures as outputs (the list of tuples).
Most commonly, we'll verify the behavior in each test by figuring out what the answer should be for a given set of inputs, and asserting that the actual data structure exactly matches the expected. For a list of tuples, that would look something like:
assert len(expected) == len(actual)
for x in range(len(actual)):
    assert len(expected[x]) == len(actual[x])
    for y in range(len(actual[x])):
        assert expected[x][y] == actual[x][y]
Although of course you could refactor that into something that looks nicer
assert expected == actual
Another possibility is to think about the properties that a solution should have, and verify that the actual result is consistent with those properties. Here, you seem to have two properties that are required for every solution:
Each pair of players should appear at least once in the list of matches
Every (player, board game) pair should appear exactly once in the list of matches
In this case, the answer is easy enough to check (iterate through all of the matches, count each pair, assert each count is at least one).
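A property check of this sort might look like the following sketch, written against the question's "at least once" requirement. The schedule shape (a list of games, each a list of player tuples) and the function name are illustrative assumptions, not a fixed API:

```python
import itertools
from collections import Counter

def check_pairing_property(schedule, n_players):
    # Hypothetical schedule shape: a list of games, each a list of
    # matches, each match a tuple of player numbers (1-based).
    counts = Counter()
    for game in schedule:
        for match in game:
            counts.update(itertools.combinations(sorted(match), 2))
    # Property: every pair of players meets at least once.
    for pair in itertools.combinations(range(1, n_players + 1), 2):
        assert counts[pair] >= 1, f"pair {pair} never meets"
```

A test then only has to build a schedule and call the checker, without pinning the result to one specific arrangement.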
We introduce the tests themselves by starting with the easiest example we can think of. Here, that might be the case where we have 2 players and 1 board, and our answer should be
BoardGame1 - Match1 - Player1 + Player2
And so we write that test (RED), and hard code this specific answer (GREEN), and then (REFACTOR) the code so that it is clear to the reader why this is the correct answer for these inputs.
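For the 2-players/1-game starting point, the first RED/GREEN cycle might look like the sketch below. The function name `make_schedule` and the return shape are illustrative assumptions, not a fixed API:

```python
def make_schedule(n_players, n_games):
    # GREEN: a hard-coded answer that passes the first test;
    # the REFACTOR step then generalizes it.
    return [[(1, 2)]]

def test_two_players_one_game():
    # RED: this test is written first, before make_schedule exists.
    assert make_schedule(2, 1) == [[(1, 2)]]

test_two_players_one_game()
```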
And when you are happy with that code, you look for the next example - an example where the current implementation returns the wrong answer, but the change that you need to make to get it to return the right answer is small/easy.
Often, what will happen is that we "pass" the next test using a branch:
if special_case(inputs):
    return answer_for_special_case
else:
    # ... real implementation here ...
    return answer_for_general_case
And then refactor the code until the two blocks are the same, then finally remove the if clause.
It will sometimes happen that the new test is too big, and we can't figure out how to extend the algorithm to cover the new case. Usually the play is to revert any changes we've made (keeping the tests passing), and use what we have learned to find a different test that might be easier to introduce to the code.
And you keep iterating on this process until you have solved "all" of the problem.

Here is a resolvable triple system due to Haim Hanani ("On resolvable
balanced incomplete block designs", 1974), which provides a schedule for
11 games (drop one to get 10). Unfortunately, it repeats some matches.
import collections
import itertools
from pprint import pprint

def Match(a, b, c):
    return tuple(sorted([a, b, c]))

games = []
for j in range(4):
    # x ^ j permutes the players within each block of four:
    # {0..3}, {4..7}, {8..11}.
    games.append(
        [
            Match(0 ^ j, 4 ^ j, 8 ^ j),
            Match(1 ^ j, 2 ^ j, 3 ^ j),
            Match(5 ^ j, 6 ^ j, 7 ^ j),
            Match(9 ^ j, 10 ^ j, 11 ^ j),
        ]
    )
games.append([Match(1 ^ j, 6 ^ j, 11 ^ j) for j in range(4)])
games.append([Match(2 ^ j, 7 ^ j, 9 ^ j) for j in range(4)])
games.append([Match(3 ^ j, 5 ^ j, 10 ^ j) for j in range(4)])
for j in range(4):
    games.append(
        [
            Match(0 ^ j, 4 ^ j, 8 ^ j),
            Match(1 ^ j, 6 ^ j, 11 ^ j),
            Match(2 ^ j, 7 ^ j, 9 ^ j),
            Match(3 ^ j, 5 ^ j, 10 ^ j),
        ]
    )
for game in games:
    game.sort()
pprint(len(games))
pprint(games)
pair_counts = collections.Counter()
for game in games:
    for triple in game:
        pair_counts.update(itertools.combinations(sorted(triple), 2))
pprint(max(pair_counts.values()))
Output:
11
[[(0, 4, 8), (1, 2, 3), (5, 6, 7), (9, 10, 11)],
[(0, 2, 3), (1, 5, 9), (4, 6, 7), (8, 10, 11)],
[(0, 1, 3), (2, 6, 10), (4, 5, 7), (8, 9, 11)],
[(0, 1, 2), (3, 7, 11), (4, 5, 6), (8, 9, 10)],
[(0, 7, 10), (1, 6, 11), (2, 5, 8), (3, 4, 9)],
[(0, 5, 11), (1, 4, 10), (2, 7, 9), (3, 6, 8)],
[(0, 6, 9), (1, 7, 8), (2, 4, 11), (3, 5, 10)],
[(0, 4, 8), (1, 6, 11), (2, 7, 9), (3, 5, 10)],
[(0, 7, 10), (1, 5, 9), (2, 4, 11), (3, 6, 8)],
[(0, 5, 11), (1, 7, 8), (2, 6, 10), (3, 4, 9)],
[(0, 6, 9), (1, 4, 10), (2, 5, 8), (3, 7, 11)]]
2

Combinatorial optimization is another possibility. It doesn't scale
especially well, but it handles 12 players and 10 games easily.
import collections
import itertools
from pprint import pprint

from ortools.sat.python import cp_model

def partitions(V):
    # Yield every way to split the set V into unordered triples.
    if not V:
        yield []
        return
    a = min(V)
    V.remove(a)
    for b, c in itertools.combinations(sorted(V), 2):
        for part in partitions(V - {b, c}):
            yield [(a, b, c)] + part

parts = list(partitions(set(range(12))))

model = cp_model.CpModel()
# One Boolean per possible game (partition into triples); choose exactly 10.
part_vars = [model.NewBoolVar("") for part in parts]
model.Add(sum(part_vars) == 10)
pairs = collections.defaultdict(list)
for part, var in zip(parts, part_vars):
    for (a, b, c) in part:
        pairs[(a, b)].append(var)
        pairs[(a, c)].append(var)
        pairs[(b, c)].append(var)
# Every pair of players must meet at least once and at most twice.
for clique in pairs.values():
    total = sum(clique)
    model.Add(1 <= total)
    model.Add(total <= 2)
solver = cp_model.CpSolver()
status = solver.Solve(model)
print(solver.StatusName(status))
schedule = []
for part, var in zip(parts, part_vars):
    if solver.Value(var):
        schedule.append(part)
pprint(schedule)
Sample output:
OPTIMAL
[[(0, 1, 6), (2, 4, 9), (3, 8, 11), (5, 7, 10)],
[(0, 1, 10), (2, 3, 5), (4, 8, 9), (6, 7, 11)],
[(0, 2, 4), (1, 8, 10), (3, 7, 11), (5, 6, 9)],
[(0, 2, 8), (1, 4, 11), (3, 7, 9), (5, 6, 10)],
[(0, 3, 6), (1, 4, 7), (2, 5, 8), (9, 10, 11)],
[(0, 3, 8), (1, 5, 11), (2, 6, 9), (4, 7, 10)],
[(0, 4, 5), (1, 2, 7), (3, 6, 10), (8, 9, 11)],
[(0, 5, 11), (1, 3, 9), (2, 6, 7), (4, 8, 10)],
[(0, 7, 9), (1, 6, 8), (2, 10, 11), (3, 4, 5)],
[(0, 9, 10), (1, 2, 3), (4, 6, 11), (5, 7, 8)]]
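The sample output can be checked directly against the model's constraints (every pair meets at least once and at most twice); a quick verification sketch over the printed schedule:

```python
import collections

schedule = [[(0, 1, 6), (2, 4, 9), (3, 8, 11), (5, 7, 10)],
            [(0, 1, 10), (2, 3, 5), (4, 8, 9), (6, 7, 11)],
            [(0, 2, 4), (1, 8, 10), (3, 7, 11), (5, 6, 9)],
            [(0, 2, 8), (1, 4, 11), (3, 7, 9), (5, 6, 10)],
            [(0, 3, 6), (1, 4, 7), (2, 5, 8), (9, 10, 11)],
            [(0, 3, 8), (1, 5, 11), (2, 6, 9), (4, 7, 10)],
            [(0, 4, 5), (1, 2, 7), (3, 6, 10), (8, 9, 11)],
            [(0, 5, 11), (1, 3, 9), (2, 6, 7), (4, 8, 10)],
            [(0, 7, 9), (1, 6, 8), (2, 10, 11), (3, 4, 5)],
            [(0, 9, 10), (1, 2, 3), (4, 6, 11), (5, 7, 8)]]
counts = collections.Counter()
for game in schedule:
    for a, b, c in game:
        counts.update([(a, b), (a, c), (b, c)])
# All 66 pairs of the 12 players meet, and no pair meets more than twice.
assert len(counts) == 66
assert all(1 <= v <= 2 for v in counts.values())
```

Since 10 games produce 120 pair slots over 66 pairs, exactly 54 pairs meet twice and 12 meet once.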

Related

Assigning colors to nodes in a graph such that no two neighbors of a node have the same color

In the graph above, no two adjacent nodes have the same color. I created a grid graph with diagonal edges across nodes using networkx in Python and applied greedy coloring to it.
greed = nx.coloring.greedy_color(G)
print(greed)
which gives the output
{(1, 1): 0, (1, 2): 1, (1, 3): 0, (1, 4): 1, (1, 5): 0, (1, 6): 1, (1, 7): 0, (1, 8): 1, (2, 1): 2, (2, 2): 3, (2, 3): 2, (2, 4): 3, (2, 5): 2, (2, 6): 3, (2, 7): 2, (2, 8): 3, (3, 1): 0, (3, 2): 1, (3, 3): 0, (3, 4): 1, (3, 5): 0, (3, 6): 1, (3, 7): 0, (3, 8): 1, (4, 1): 2, (4, 2): 3, (4, 3): 2, (4, 4): 3, (4, 5): 2, (4, 6): 3, (4, 7): 2, (4, 8): 3, (5, 1): 0, (5, 2): 1, (5, 3): 0, (5, 4): 1, (5, 5): 0, (5, 6): 1, (5, 7): 0, (5, 8): 1, (6, 1): 2, (6, 2): 3, (6, 3): 2, (6, 4): 3, (6, 5): 2, (6, 6): 3, (6, 7): 2, (6, 8): 3, (7, 1): 0, (7, 2): 1, (7, 3): 0, (7, 4): 1, (7, 5): 0, (7, 6): 1, (7, 7): 0, (7, 8): 1, (8, 1): 2, (8, 2): 3, (8, 3): 2, (8, 4): 3, (8, 5): 2, (8, 6): 3, (8, 7): 2, (8, 8): 3, (0, 1): 2, (0, 2): 3, (0, 3): 2, (0, 4): 3, (0, 5): 2, (0, 6): 3, (0, 7): 2, (0, 8): 3, (1, 0): 1, (1, 9): 0, (2, 0): 3, (2, 9): 2, (3, 0): 1, (3, 9): 0, (4, 0): 3, (4, 9): 2, (5, 0): 1, (5, 9): 0, (6, 0): 3, (6, 9): 2, (7, 0): 1, (7, 9): 0, (8, 0): 3, (8, 9): 2, (9, 1): 0, (9, 2): 1, (9, 3): 0, (9, 4): 1, (9, 5): 0, (9, 6): 1, (9, 7): 0, (9, 8): 1, (0, 0): 3, (0, 9): 2, (9, 0): 1, (9, 9): 0}
after sorting
{(0, 0): 3, (0, 1): 2, (0, 2): 3, (0, 3): 2, (0, 4): 3, (0, 5): 2, (0, 6): 3, (0, 7): 2, (0, 8): 3, (0, 9): 2, (1, 0): 1, (1, 1): 0, (1, 2): 1, (1, 3): 0, (1, 4): 1, (1, 5): 0, (1, 6): 1, (1, 7): 0, (1, 8): 1, (1, 9): 0, (2, 0): 3, (2, 1): 2, (2, 2): 3, (2, 3): 2, (2, 4): 3, (2, 5): 2, (2, 6): 3, (2, 7): 2, (2, 8): 3, (2, 9): 2, (3, 0): 1, (3, 1): 0, (3, 2): 1, (3, 3): 0, (3, 4): 1, (3, 5): 0, (3, 6): 1, (3, 7): 0, (3, 8): 1, (3, 9): 0, (4, 0): 3, (4, 1): 2, (4, 2): 3, (4, 3): 2, (4, 4): 3, (4, 5): 2, (4, 6): 3, (4, 7): 2, (4, 8): 3, (4, 9): 2, (5, 0): 1, (5, 1): 0, (5, 2): 1, (5, 3): 0, (5, 4): 1, (5, 5): 0, (5, 6): 1, (5, 7): 0, (5, 8): 1, (5, 9): 0, (6, 0): 3, (6, 1): 2, (6, 2): 3, (6, 3): 2, (6, 4): 3, (6, 5): 2, (6, 6): 3, (6, 7): 2, (6, 8): 3, (6, 9): 2, (7, 0): 1, (7, 1): 0, (7, 2): 1, (7, 3): 0, (7, 4): 1, (7, 5): 0, (7, 6): 1, (7, 7): 0, (7, 8): 1, (7, 9): 0, (8, 0): 3, (8, 1): 2, (8, 2): 3, (8, 3): 2, (8, 4): 3, (8, 5): 2, (8, 6): 3, (8, 7): 2, (8, 8): 3, (8, 9): 2, (9, 0): 1, (9, 1): 0, (9, 2): 1, (9, 3): 0, (9, 4): 1, (9, 5): 0, (9, 6): 1, (9, 7): 0, (9, 8): 1, (9, 9): 0}
But I want it to be such that no two neighbors of a node have the same color.
In the figure above, (1,4) [green] has neighbors (1,3) [red] and (1,5) [red]; both nodes next to (1,4) are red, but I want (1,3) and (1,5) to have different colors. Can anyone tell me how to solve this problem?
I tried the greedy coloring method from networkx, which only ensures that no two nodes adjacent to each other have the same color.
The problem is that you have an additional constraint that the coloring algorithm does not respect. You have two choices: change the algorithm to respect the constraint (hard), or change the data (the graph) so that the constraint is built into it.
The second option is really easy here. All we have to do is add edges between nodes that should not have the same color (that is, nodes that share a common neighbor), then color the modified graph.
Create a deep copy G2 of the graph G. As we will modify the graph to match the new constraints, we have to keep the original intact.
For every pair of nodes n_1, n_2 in G:
If they are adjacent, there is nothing to do.
If they share a common neighbor in G, add an edge (n_1, n_2) in G2.
Color G2.
For every node in G, set its color to the color of the corresponding node in G2.
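A minimal sketch of this recipe, using plain adjacency dicts in place of the networkx graph objects (`color_distance_2` is a made-up name for illustration):

```python
def color_distance_2(adj):
    # adj: dict mapping each node to the set of its neighbors in G.
    # Build G2: same nodes, plus an edge between any two nodes that
    # share a common neighbor in G.
    adj2 = {v: set(ns) for v, ns in adj.items()}
    nodes = sorted(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if adj[u] & adj[v]:  # common neighbor -> conflict edge in G2
                adj2[u].add(v)
                adj2[v].add(u)
    # Greedy-color G2; the result also satisfies the extra constraint on G.
    color = {}
    for v in nodes:
        used = {color[u] for u in adj2[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

On a path a-b-c, for example, a and c get different colors even though they are not adjacent, because both neighbor b.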
Have you tried the Graph Coloring algorithm?
Step 1 − Arrange the vertices of the graph in some order.
Step 2 − Choose the first vertex and color it with the first color.
Step 3 − Choose the next vertex and color it with the lowest-numbered color that has not been used on any vertex adjacent to it. If all existing colors appear on adjacent vertices, assign a new color to it. Repeat this step until all the vertices are colored.
Credits: https://www.tutorialspoint.com/the-graph-coloring
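Those three steps translate almost line-for-line into code; a minimal sketch over an adjacency dict (note this is plain greedy coloring, which enforces only the adjacent-node constraint, not the extra neighbor-of-neighbor constraint from the question):

```python
def greedy_color(adj):
    # Step 1: visit the vertices in some fixed order (here: sorted).
    color = {}
    for v in sorted(adj):
        # Step 3: pick the lowest-numbered color not already used by
        # a colored neighbor; a fresh color is just the next integer.
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```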

Maximum sum of combinations (algorithm optimization)

I have a dictionary of length (n choose m), with keys of tuples of length m containing all combinations of the integers 1 to n. I would like to compute the maximum sum of values of this dictionary over keys whose indices are mutually disjoint, storing the combination of keys that makes up this maximum.
Example input:
input_dict = {
    (1, 2): 16,
    (1, 3): 4,
    (1, 4): 13,
    (1, 5): 8,
    (1, 6): 9,
    (2, 3): 6,
    (2, 4): 19,
    (2, 5): 7,
    (2, 6): 16,
    (3, 4): 12,
    (3, 5): 23,
    (3, 6): 12,
    (4, 5): 17,
    (4, 6): 19,
    (5, 6): 21,
}
Example output:
(((1, 2), (3, 5), (4, 6)), 58)
My current approach: compute the sum of all unique combinations of outputs and take the maximum.
gen_results = [{}]
for (key, val) in input_dict.items():
    gen_results[0][(key,)] = val
i = 0
complete = False
while not complete:
    complete = True
    gen_results.append({})
    for (combinations, running_sum) in gen_results[i].items():
        for (key, val) in input_dict.items():
            unique_combination = True
            for combination in combinations:
                for idx in key:
                    if idx in combination:
                        unique_combination = False
                        break
                if not unique_combination:
                    break
            if unique_combination:
                complete = False
                gen_results[i + 1][combinations + (key,)] = running_sum + val
    i += 1
generation_maximums = []
for gen_result in gen_results:
    if gen_result == {}:
        continue
    generation_maximums.append(max(gen_result.items(), key=(lambda x: x[1])))
print(max(generation_maximums, key=(lambda x: x[1])))
How can I improve my algorithm for large n and m?
If you don't go for integer programming, you can often brute-force these things using bitmasks as hashes.
E.g. the following outputs 58:
input_dict = {
    (1, 2): 16,
    (1, 3): 4,
    (1, 4): 13,
    (1, 5): 8,
    (1, 6): 9,
    (2, 3): 6,
    (2, 4): 19,
    (2, 5): 7,
    (2, 6): 16,
    (3, 4): 12,
    (3, 5): 23,
    (3, 6): 12,
    (4, 5): 17,
    (4, 6): 19,
    (5, 6): 21,
}
dp = {}
n, m = 2, 6
for group, score in input_dict.items():
    # Encode each key as a bitmask over the integers 1..m.
    bit_hash = 0
    for x in group:
        bit_hash += 1 << (x - 1)
    dp[bit_hash] = score
while True:
    items = list(dp.items())  # snapshot: dp grows during the loop
    for hash1, score1 in items:
        for hash2, score2 in items:
            if hash1 & hash2 == 0:  # disjoint index sets
                dp[hash1 | hash2] = max(dp.get(hash1 | hash2, 0), score1 + score2)
    # Stop once every even-sized, non-empty subset has been reached.
    if len(dp) == (1 << m) // 2 - 1:
        print(dp[(1 << m) - 1])
        break

Alternating product of two lists

If I use itertools.product with two lists, the nested for loop equivalent always cycles the second list first:
>>> from itertools import product
>>> list(product([1,2,3], [4,5,6]))
[(1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 4), (3, 5), (3, 6)]
But for some use cases I might want these orders to be alternating, much like popping the first items off each list, without actually popping them. The hypothetical function would first give [1, 4], then [2, 4] (1 popped), and then [2, 5] (4 popped), and then [3, 5], and finally [3, 6].
>>> list(hypothetical([1,2,3], [4,5,6]))
[(1, 4), (2, 4), (2, 5), (3, 5), (3, 6)]
The only method I can think of is yielding in a for loop with a "which list to pop from next" flag.
Is there a built-in or library method that does this? Will I have to write my own?
import itertools

L1 = [1, 2, 3]
L2 = [4, 5, 6]
print(list(zip(itertools.islice((e for e in L1 for x in (1, 2)), 1, None),
               (e for e in L2 for x in (1, 2)))))
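The one-liner can be unpacked into a small named generator; `alternating` is a hypothetical name standing in for the question's `hypothetical`:

```python
from itertools import chain, islice

def alternating(l1, l2):
    # Repeat every element twice, then drop the first element of the
    # first stream, so consecutive zipped pairs advance the two lists
    # alternately.
    doubled1 = chain.from_iterable((e, e) for e in l1)
    doubled2 = chain.from_iterable((e, e) for e in l2)
    return zip(islice(doubled1, 1, None), doubled2)

print(list(alternating([1, 2, 3], [4, 5, 6])))
# -> [(1, 4), (2, 4), (2, 5), (3, 5), (3, 6)]
```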

X-Y Heuristic on the N-Puzzle

First of all, I have seen this answer, and yes, it explains the X-Y heuristic, but the example board was too simple for me to understand the general heuristic.
X-Y heuristic function for solving N-puzzle
So could someone please explain the X-Y heuristic using this example?
8 1 2
7 3 6
0 5 4
The algorithm consists of two separate parts: one for rows and one for columns.
1) Rows. Divide the input matrix by rows - the elements of each row go to a separate set.
(1, 2, 8) - (3, 6, 7) - (0, 4, 5)
The only available move is swapping 0 with an element from an adjacent set.
You finish when each element is in the proper set.
swap 0 and 7 -> (1, 2, 8) - (0, 3, 6) - (4, 5, 7)
swap 0 and 8 -> (0, 1, 2) - (3, 6, 8) - (4, 5, 7)
swap 0 and 3 -> (1, 2, 3) - (0, 6, 8) - (4, 5, 7)
swap 0 and 4 -> (1, 2, 3) - (4, 6, 8) - (0, 5, 7)
swap 0 and 8 -> (1, 2, 3) - (0, 4, 6) - (5, 7, 8)
swap 0 and 5 -> (1, 2, 3) - (4, 5, 6) - (0, 7, 8)
Number of required steps = 6.
2) Similarly for columns. You start with:
(0, 7, 8) - (1, 3, 5) - (2, 4, 6)
And then
(1, 7, 8) - (0, 3, 5) - (2, 4, 6)
(0, 1, 7) - (3, 5, 8) - (2, 4, 6)
(1, 3, 7) - (0, 5, 8) - (2, 4, 6)
(1, 3, 7) - (2, 5, 8) - (0, 4, 6)
(1, 3, 7) - (0, 2, 5) - (4, 6, 8)
(0, 1, 3) - (2, 5, 7) - (4, 6, 8)
(1, 2, 3) - (0, 5, 7) - (4, 6, 8)
(1, 2, 3) - (4, 5, 7) - (0, 6, 8)
(1, 2, 3) - (0, 4, 5) - (6, 7, 8)
(1, 2, 3) - (4, 5, 6) - (0, 7, 8)
Number of required steps = 10
3) Total number of steps: 6 + 10 = 16
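The minimum number of such swaps can be computed mechanically with a small breadth-first search over the abstract row (or column) states. A sketch for the row part, assuming the goal board is 1..8 with the blank last, as in the walkthrough:

```python
from collections import deque

def relaxed_steps(start, goal):
    # start/goal: lists of row (or column) sets; 0 marks the blank's set.
    start = tuple(frozenset(s) for s in start)
    goal = tuple(frozenset(s) for s in goal)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, dist = queue.popleft()
        if state == goal:
            return dist
        z = next(i for i, s in enumerate(state) if 0 in s)
        for nb in (z - 1, z + 1):          # only adjacent sets
            if not 0 <= nb < len(state):
                continue
            for x in state[nb]:             # swap the blank with x
                rows = list(state)
                rows[z] = (rows[z] - {0}) | {x}
                rows[nb] = (rows[nb] - {x}) | {0}
                nxt = tuple(rows)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))

# Row part of the example board:
print(relaxed_steps([{1, 2, 8}, {3, 6, 7}, {0, 4, 5}],
                    [{1, 2, 3}, {4, 5, 6}, {0, 7, 8}]))  # -> 6
```

The 6 here matches the row walkthrough above; it is optimal because each swap moves exactly one tile one row, and the tiles are a total of six rows away from home.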

Obtaining all groups of n pairs of numbers that pass condition

I have a list with certain combinations between two numbers:
[1 2] [1 4] [1 6] [3 4] [5 6] [3 6] [2 3] [4 5] [2 5]
Now I want to make groups of 3 combinations, where each group contains all six digits once, e.g.:
[1 2] [3 6] [4 5] is valid
[1 4] [2 3] [5 6] is valid
[1 2] [2 3] [5 6] is invalid
Order is not important.
How can I arrive at a list of all possible groups without employing a brute-force algorithm?
The language it is implemented in doesn't matter. Description of an algorithm that could achieve this is enough.
One thing to notice is that there are only finitely many possible pairs of elements you can pick from the set {1,2,3,4,5,6}. Specifically, there are (6P2) = 30 of them if you consider order relevant and (6 choose 2) = 15 if you don't. Even the simple "try all triples" algorithm that runs in cubic time in this case will only have to look at at most (30 choose 3) = 4,060 triples, which is a pretty small number. I doubt that you'd have any problems in practice just doing this.
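As the paragraph above suggests, even the naive filter is tiny here; a sketch that scans all (9 choose 3) = 84 candidate triples from the question's list:

```python
import itertools

ok_pairs = [(1, 2), (1, 4), (1, 6), (3, 4), (5, 6), (3, 6), (2, 3), (4, 5), (2, 5)]
groups = [
    trio for trio in itertools.combinations(ok_pairs, 3)
    # keep a triple only if its three pairs cover all six digits once
    if len({x for pair in trio for x in pair}) == 6
]
for g in groups:
    print(g)
```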
Here's a recursive function in Python that picks a pair of numbers from a list, and then calls itself with the remaining list:
def pairs(l, picked, ok_pairs):
    n = len(l)
    for a in range(n - 1):
        for b in range(a + 1, n):
            pair = (l[a], l[b])
            if pair not in ok_pairs:
                continue
            if picked and picked[-1][0] > pair[0]:
                continue
            p = picked + [pair]
            if len(l) > 2:
                pairs([m for i, m in enumerate(l) if i not in [a, b]], p, ok_pairs)
            else:
                print(p)

ok_pairs = set([(1, 2), (1, 4), (1, 6), (3, 4), (5, 6), (3, 6), (2, 3), (4, 5), (2, 5)])
pairs([1, 2, 3, 4, 5, 6], [], ok_pairs)
The output (of 6 triplets) is:
[(1, 2), (3, 4), (5, 6)]
[(1, 2), (3, 6), (4, 5)]
[(1, 4), (2, 3), (5, 6)]
[(1, 4), (2, 5), (3, 6)]
[(1, 6), (2, 3), (4, 5)]
[(1, 6), (2, 5), (3, 4)]
Here's a version using Python set arithmetic:
pairs = [(1, 2), (1, 4), (1, 6), (3, 4), (5, 6), (3, 6), (2, 3), (4, 5), (2, 5)]
n = len(pairs)
for i in range(n - 2):
    set1 = set(pairs[i])
    for j in range(i + 1, n - 1):
        set2 = set(pairs[j])
        if set1 & set2:
            continue
        for k in range(j + 1, n):
            set3 = set(pairs[k])
            if set1 & set3 or set2 & set3:
                continue
            print(pairs[i], pairs[j], pairs[k])
The output is:
(1, 2) (3, 4) (5, 6)
(1, 2) (3, 6) (4, 5)
(1, 4) (5, 6) (2, 3)
(1, 4) (3, 6) (2, 5)
(1, 6) (3, 4) (2, 5)
(1, 6) (2, 3) (4, 5)
