This is my first question. I tried to find an answer for 2 days but I couldn't find what I was looking for.
Question: How can I minimize the number of matches between students from the same school?
I have a very practical case: I need to arrange a competition (a tournament bracket),
but some of the participants might come from the same school.
Those from the same school should be placed as far from each other as possible,
for example: {A A A B B C} => {A B}, {A C}, {A B}
If more than half of the participants come from one school, then there is no other way but to pair up two students from the same school,
for example: {A A A A B C} => {A B}, {A C}, {A A}
I don't expect to get code; just some keywords or some pseudocode on what you think would be a good approach would be of great help!
I tried digging into constraint satisfaction algorithms and tournament bracket algorithms, but they don't consider minimising the number of matches between students from the same school.
Well, thank you so much in advance!
A simple algorithm (EDIT 2)
From the comments below: you have a single elimination tournament. You must choose the places of the players in the tournament bracket. If you look at your bracket, you see players, but also pairs of players (players that meet in match 1), pairs of pairs of players (winner of pair 1 against winner of pair 2 in match 2), and so on.
The idea
Sort the students by school, schools with more students before schools with fewer, e.g. A B B B B C C -> B B B B C C A.
Distribute the students into two groups A and B as when dealing cards in the game of War: 1st student to A, 2nd to B, 3rd to A, 4th to B, ...
Continue with groups A and B.
You have a recursion: the position of a player at level k-1 (for k = n-1 down to 0) is ((pos at level k) % 2) * 2^k + (pos at level k) // 2 (every even position goes to the left, every odd position goes to the right).
Python code
Sort the players by the size of their school's contingent:
import collections
import math

n = int(math.log2(len(players)))  # n is the number of rounds
assert 2**n == len(players)       # the number of players must be a power of two
c = collections.Counter(p.school for p in players)
players_sorted_by_school_count = sorted(players, key=lambda p: -c[p.school])
Find the final position of every player:
players_sorted_for_tournament = [-1] * 2**n
for j, player in enumerate(players_sorted_by_school_count):
    pos = 0
    for e in range(n - 1, -1, -1):
        if j % 2 == 1:
            pos += 2**e  # to the right
        j = j // 2
    players_sorted_for_tournament[pos] = player
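To see it run end to end, here is a self-contained driver for the two snippets above (the Player type and the sample data are mine, purely for illustration):

import collections
import math

Player = collections.namedtuple("Player", ["name", "school"])
players = [Player("p%d" % i, s) for i, s in enumerate("AABBBBCC")]

n = int(math.log2(len(players)))
c = collections.Counter(p.school for p in players)
players_sorted_by_school_count = sorted(players, key=lambda p: -c[p.school])

players_sorted_for_tournament = [None] * 2**n
for j, player in enumerate(players_sorted_by_school_count):
    pos = 0
    for e in range(n - 1, -1, -1):
        if j % 2 == 1:
            pos += 2**e
        j //= 2
    players_sorted_for_tournament[pos] = player

print([p.school for p in players_sorted_for_tournament])
# ['B', 'A', 'B', 'C', 'B', 'A', 'B', 'C']: no first-round match pairs two
# students from the same school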
This should give groups that are diverse enough, but I'm not sure whether it's optimal or not. Waiting for comments.
First version: how to make pairs from students of different schools
Just put the students from the same school into a stack: you have as many stacks as schools. Now, sort your stacks by number of students. In your first example {A A A B B C}, you get:
A
A B
A B C
Now, take the top element from each of the first two stacks. The stack sizes have changed: if needed, reorder the stacks and continue. When you have only one stack left, make pairs from this stack.
The idea is to keep as many "schools-stacks" as possible as long as possible: you spare the students of small stacks until you have no choice but to take them.
Steps with your second example, {A A A A B C}:
A
A
A
A B C => output A, B
A
A
A C => output A, C
A
A => output A A
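A minimal Python sketch of this stack idea (my own code, assuming students are (name, school) tuples; heapq serves as a max-heap keyed on stack size):

import heapq

def make_pairs(students):
    # one stack per school
    stacks = {}
    for student in students:
        stacks.setdefault(student[1], []).append(student)
    # max-heap keyed on stack size (sizes negated for Python's min-heap)
    heap = [(-len(s), school, s) for school, s in stacks.items()]
    heapq.heapify(heap)
    pairs = []
    while len(heap) >= 2:
        # take the top students of the two biggest stacks
        _, school1, s1 = heapq.heappop(heap)
        _, school2, s2 = heapq.heappop(heap)
        pairs.append((s1.pop(), s2.pop()))
        if s1:
            heapq.heappush(heap, (-len(s1), school1, s1))
        if s2:
            heapq.heappush(heap, (-len(s2), school2, s2))
    if heap:
        # only one school left: no choice but same-school pairs
        _, _, s = heap[0]
        while len(s) >= 2:
            pairs.append((s.pop(), s.pop()))
    return pairs

print(make_pairs([("p1", "A"), ("p2", "A"), ("p3", "A"),
                  ("p4", "A"), ("p5", "B"), ("p6", "C")]))
# [(A, B), (A, C), (A, A)], as in the second example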
It's a matching problem (EDIT 1)
I elaborate on the comments below the question. As described above, you have a single elimination tournament and must choose the places of the players in the bracket: players, then pairs of players, then pairs of pairs, and so on.
Your solution is to start with the set of all players and split it into two sets that are as diverse as possible. "Diverse" means here: the maximum number of different schools. To do so, you check all possible combinations of elements that split the set into two subsets of equal size. Then you perform the same operation recursively on those sets, until you arrive at the player level.
Another idea is to start with players and try to make pairs with players from other schools. Let's define a distance: 1 if two players are in the same school, 0 if they are in different schools. You want to make pairs with the minimum global distance.
This distance may be generalized for the pairs of players: take the number of common schools. That is: A B A B -> 2 (A & B), A B A C -> 1 (A), A B C D -> 0. You can imagine the distance between two sets (players, pairs, pairs of pairs, ...): the number of common schools. Now you can see this as a graph whose vertices are the sets (players, pairs, pairs of pairs, ...) and whose edges connect every pair of vertices with a weight that is the distance defined above. You are looking for a perfect matching (all vertices are matched) with a minimum weight.
The blossom algorithm or some of its variants seems to fit your needs, but it's probably overkill if the number of players is limited.
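For instance, with the networkx library (a sketch of the formulation, not the asker's code; min_weight_matching runs a blossom-style algorithm underneath):

import itertools
import networkx as nx

players = [("p1", "A"), ("p2", "A"), ("p3", "A"),
           ("p4", "A"), ("p5", "B"), ("p6", "C")]

G = nx.Graph()
for (n1, s1), (n2, s2) in itertools.combinations(players, 2):
    # distance 1 for a same-school pair, 0 otherwise
    G.add_edge(n1, n2, weight=1 if s1 == s2 else 0)

# minimum-weight perfect matching: as few same-school pairs as possible
print(nx.min_weight_matching(G))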
Create a two-dimensional array, where the first dimension will be for each school and the second dimension will be for each participant from that school.
Load them and you'll have everything you need linearly.
For example:
School 1 ------- School 2 -------- School 3
A ------------ B ------------- C
A ------------ B ------------- C
A ------------ B ------------- C
A ------------ B
A ------------ B
A
A
In the example above, we will have 3 schools (first dimension), with school 1 having 7 participants (second dimension), school 2 having 5 participants and school 3 having 3 participants.
You can also create a second array containing the resulting combinations and, for each chosen pair, delete this pair from the initial array in a loop until it is completely empty and the result array is completely full.
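A rough sketch of that loop (the school sizes follow the example; the rule for choosing each pair, always drawing from the two schools with the most remaining participants, is my own):

schools = {"School 1": ["A"] * 7, "School 2": ["B"] * 5, "School 3": ["C"] * 3}

result = []
while sum(len(group) for group in schools.values()) >= 2:
    # the two schools with the most remaining participants
    s1, s2 = sorted(schools, key=lambda s: -len(schools[s]))[:2]
    if not schools[s2]:
        s2 = s1  # only one school left: pair within it
    result.append((schools[s1].pop(), schools[s2].pop()))

print(result)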
I think the algorithm in this answer could help.
Basically: group the students by school, and use the error tracking idea behind Bresenham's Algorithm to distribute the schools as far apart as possible. Then you pull out pairs from the list.
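Here is my reading of that error-tracking idea as a sketch: each school accumulates "error" proportional to its share of the students, and the school with the largest accumulated error emits the next student.

from collections import Counter

def spread(students):  # students: a list of school labels, e.g. "AAABBC"
    counts = Counter(students)
    density = {s: c / len(students) for s, c in counts.items()}
    error = dict.fromkeys(counts, 0.0)
    out = []
    for _ in students:
        for s in error:
            error[s] += density[s]
        # emit the school with the largest error that still has students left
        s = max((s for s in error if counts[s] > 0), key=lambda s: error[s])
        error[s] -= 1.0
        counts[s] -= 1
        out.append(s)
    return out

print(spread("AAABBC"))  # e.g. ['A', 'B', 'A', 'C', 'B', 'A']; pair up consecutive items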
Given a list L of an even number (2k) of elements, I'm looking for an algorithm to produce a list of 2k-1 sublists with the following properties:
each sublist includes exactly k 2-combinations (pairs where the order does not matter) of elements from L,
each sublist includes every element from L exactly once, and
the union of all elements from all sublists is exactly the set of all possible 2-combinations of the elements from L.
For example, if the input list is L = [a, b, c, d], we have k = 2 with 3 sublists, each including 2 pairs. A possible solution would look like [[ab, cd], [ac, bd], [ad, bc]]. If we ignore the ordering for all elements in the lists (think of all lists as sets), it turns out that this is also the only solution for k = 2.
My aim now is not only to find a single solution but all possible solutions. As the number of involved combinations grows pretty quickly, it would be nice to have all results be constructed in a clever way instead of generating a huge list of candidates and removing the elements from it that don't satisfy the given properties. Such a naïve algorithm could look like the following:
Find the set C of all 2-combinations for L.
Find the set D of all k-combinations for C.
Choose all sets from D that union equals L, call the new set D'.
Find the set E of all (2k-1)-combinations for D'.
Choose all sets from E that union is the set C, and let the new set be the final output.
This algorithm is easy to implement but it's incredibly slow for bigger input lists. So is there a way to construct the result list more efficiently?
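For concreteness, here is a direct transcription of those five steps (my code; fine for k = 2, already painfully slow for k = 3):

from itertools import combinations

def naive(L):
    k = len(L) // 2
    C = list(combinations(L, 2))                           # step 1
    D = combinations(C, k)                                 # step 2
    Dp = [d for d in D
          if {x for pair in d for x in pair} == set(L)]    # step 3
    E = combinations(Dp, 2 * k - 1)                        # step 4
    return [e for e in E
            if {pair for d in e for pair in d} == set(C)]  # step 5

for solution in naive("abcd"):
    print(solution)  # the single k = 2 solution, [[ab, cd], [ac, bd], [ad, bc]]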
Edit: Here is the result for L = [a,b,c,d,e,f] with k = 3, calculated by the above algorithm:
[[[ab,cd,ef],[ac,be,df],[ad,bf,ce],[ae,bd,cf],[af,bc,de]],
[[ab,cd,ef],[ac,bf,de],[ad,be,cf],[ae,bc,df],[af,bd,ce]],
[[ab,ce,df],[ac,bd,ef],[ad,be,cf],[ae,bf,cd],[af,bc,de]],
[[ab,ce,df],[ac,bf,de],[ad,bc,ef],[ae,bd,cf],[af,be,cd]],
[[ab,cf,de],[ac,bd,ef],[ad,bf,ce],[ae,bc,df],[af,be,cd]],
[[ab,cf,de],[ac,be,df],[ad,bc,ef],[ae,bf,cd],[af,bd,ce]]]
All properties are satisfied:
each sublist has k = 3 2-combinations,
each sublist only includes each element once, and
the union of all 2k-1 = 5 sublists for one solution is exactly the set of all possible 2-combinations for L.
Edit 2: Based on user58697's answer, I improved the calculation algorithm by using the round-robin tournament scheduling:
Let S be the result set, starting with an empty set, and P be the set of all permutations of L.
Repeat the following until P is empty:
Select an arbitrary permutation from P
Perform full RRT scheduling for this permutation. In each round, the arrangement of elements from L forms a permutation of L. Remove all these 2k permutations from P.
Add the resulting schedule to S.
Remove all lists from S if the union of their sublists has duplicate elements (i.e. doesn't add up to all 2-combinations of L).
This algorithm is much more performant than the first one. I was able to calculate the number of results for k = 4 as 960 and k = 5 as 67200. The fact that there doesn't seem to be an OEIS result for this sequence makes me wonder if the numbers are actually correct, though, i.e. if the algorithm is producing the complete solution set.
It is a round-robin tournament scheduling:
A pair is a match,
A list is a round (each team plays with some other team)
A set of lists is an entire tournament (each team plays every other team exactly once).
Take a look here.
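For reference, a sketch of the standard "circle method" for round-robin scheduling: fix one element and rotate the rest; each round is then one of the question's sublists, and the 2k-1 rounds together form a full solution.

def round_robin(L):
    L = list(L)
    n = len(L)
    rounds = []
    for _ in range(n - 1):
        rounds.append([(L[i], L[n - 1 - i]) for i in range(n // 2)])
        L.insert(1, L.pop())  # rotate every element except the first
    return rounds

for rnd in round_robin("abcdef"):
    print(rnd)  # 5 rounds of 3 pairs; every pair appears exactly once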
This was an interesting question. In the process of answering it (basically after writing the program included below, and looking up the sequence on OEIS), I learned that the problem has a name and rich theory: what you want is to generate all 1-factorizations of the complete graph K_{2k}.
Let's first restate the problem in that language:
You are given a number k, and a list (set) L of size 2k. We can view L as the vertex set of the complete graph K_{2k}.
For example, with k=3, L could be {a, b, c, d, e, f}
A 1-factor (aka perfect matching) is a partition of L into unordered pairs (sets of size 2). That is, it is a set of k pairs, whose disjoint union is L.
For example, ab-cd-ef is a 1-factor of L = {a, b, c, d, e, f}. This means that a is matched with b, c is matched with d, and e is matched with f. This way, L has been partitioned into three sets {a, b}, {c, d}, and {e, f}, whose union is L.
Let S (called C in the question) denote the set of all pairs of elements of L. (In terms of the complete graph, if L is its vertex set, S is its edge set.) Note that S contains (2k choose 2) = k(2k-1) pairs. So for k = 0, 1, 2, 3, 4, 5, 6…, S has size 0, 1, 6, 15, 28, 45, 66….
For example, S = {ab, ac, ad, ae, af, bc, bd, be, bf, cd, ce, cf, de, df, ef} for our L above (k = 3, so |S| = k(2k-1) = 15).
A 1-factorization is a partition of S into sets, each of which is itself a 1-factor (perfect matching). Note that as each of these matchings has k pairs, and S has size k(2k-1), the partition has size 2k-1 (i.e., is made of 2k-1 matchings).
For example, this is a 1-factorization: {ab-cd-ef, ac-be-df, ad-bf-ce, ae-bd-cf, af-bc-de}
In other words, every element of S (every pair) occurs in exactly one element of the 1-factorization, and every element of L occurs exactly once in each element of the 1-factorization.
The problem asks to generate all 1-factorizations.
Let M denote the set of all 1-factors (all perfect matchings) of L. It is easy to prove that M contains (2k)!/(k!2^k) = 1×3×5×…×(2k-1) matchings. For k = 0, 1, 2, 3, 4, 5, 6…, the size of M is 1, 1, 3, 15, 105, 945, 10395….
For example, for our L above, M = {ab-cd-ef, ab-ce-df, ab-cf-de, ac-bd-ef, ac-be-df, ac-bf-de, ad-bc-ef, ad-be-cf, ad-bf-ce, ae-bc-df, ae-bd-cf, ae-bf-cd, af-bc-de, af-bd-ce, af-be-cd} (For k=3 this number 15 is the same as the number of pairs, but this is just a coincidence, as you can see from the other numbers: this number grows much faster than the number of pairs.)
M is easy to generate:
def perfect_matchings(l):
    if len(l) == 0:
        yield []
    for i in range(1, len(l)):
        first_pair = l[0] + l[i]
        for matching in perfect_matchings(l[1:i] + l[i+1:]):
            yield [first_pair] + matching
For example, calling perfect_matchings('abcdef') yields the 15 elements ['ab', 'cd', 'ef'], ['ab', 'ce', 'df'], ['ab', 'cf', 'de'], ['ac', 'bd', 'ef'], ['ac', 'be', 'df'], ['ac', 'bf', 'de'], ['ad', 'bc', 'ef'], ['ad', 'be', 'cf'], ['ad', 'bf', 'ce'], ['ae', 'bc', 'df'], ['ae', 'bd', 'cf'], ['ae', 'bf', 'cd'], ['af', 'bc', 'de'], ['af', 'bd', 'ce'], ['af', 'be', 'cd'] as expected.
By definition, a 1-factorization is a partition of S into elements from M. Or equivalently, any (2k-1) disjoint elements of M form a 1-factorization. This lends itself to a straightforward backtracking algorithm:
start with an empty list (partial factorization)
for each matching from the list of perfect matchings, try adding it to the current partial factorization, i.e. check whether it's disjoint (it should not contain any pair already used)
if fine, add it to the partial factorization, and try extending
In code:
from collections import defaultdict

matching_list = []
pair_used = defaultdict(lambda: False)
known_matchings = []  # Populate this list using perfect_matchings()

def extend_matching_list(r, need):
    """Finds ways of extending the matching list by `need`, using matchings r onwards."""
    if need == 0:
        use_result(matching_list)  # user-supplied: record or print the factorization
        return
    for i in range(r, len(known_matchings)):
        matching = known_matchings[i]
        conflict = any(pair_used[pair] for pair in matching)
        if conflict:
            continue  # Can't use this matching. Some of its pairs have already appeared.
        # Else, use this matching in the current matching list.
        for pair in matching:
            pair_used[pair] = True
        matching_list.append(matching)
        extend_matching_list(i + 1, need - 1)
        matching_list.pop()
        for pair in matching:
            pair_used[pair] = False
If you call it with extend_matching_list(0, len(l) - 1) (after populating known_matchings), it generates all 1-factorizations. I've put the full program that does this here. With k=4 (specifically, the list 'abcdefgh'), it outputs 6240 1-factorizations; the full output is here.
It was at this point that I fed the sequence 1, 6, 6240 into OEIS, and discovered OEIS A000438, the sequence 1, 1, 6, 6240, 1225566720, 252282619805368320, …. It shows that for k=6, the number of solutions (≈2.5×10^17) means that we can give up hope of generating all solutions. Even for k=5, the ≈1 billion solutions (recall that we're trying to find 2k-1=9 disjoint sets out of the |M|=945 matchings) will require some carefully optimized programs.
The first optimization (which, embarrassingly, I only realized later by looking closely at trace output for k=4) is that (under natural lexicographic numbering) the index of the first matching chosen in the partition cannot be greater than the number of matchings for k-1. This is because the lexicographically first element of S (like "ab") occurs only in those matchings, and if we start later than this one we'll never find it again in any other matching.
The second optimization comes from the fact that the bottleneck of a backtracking program is usually the testing for whether a current candidate is admissible. We need to test disjointness efficiently: whether a given matching (in our partial factorization) is disjoint with the union of all previous matchings. (Whether any of its k pairs is one of the pairs already covered by earlier matchings.) For k=5, it turns out that the size of S, which is (2k choose 2) = 45, is less than 64, so we can compactly represent a matching (which is after all a subset of S) in a 64-bit integer type: if we number the pairs as 0 to 44, then any matching can be represented by an integer having 1s in the positions corresponding to elements it contains. Then testing for disjointness is a simple bitwise operation on integers: we just check whether the bitwise-AND of the current candidate matching and the cumulative union (bitwise-OR) of previous matchings in our partial factorization is zero.
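To make the encoding concrete, here is the same idea sketched in Python (the linked C program does this with a 64-bit integer type):

from itertools import combinations

L = "abcdefghij"  # k = 5, so |S| = 45 pairs fit in a 64-bit word
pair_bit = {pair: 1 << i for i, pair in enumerate(combinations(L, 2))}

def encode(matching):
    # pack a matching (k pairs) into an integer bitmask
    mask = 0
    for a, b in matching:
        mask |= pair_bit[tuple(sorted((a, b)))]
    return mask

used = encode([("a", "b"), ("c", "d"), ("e", "f"), ("g", "h"), ("i", "j")])
candidate = encode([("a", "c"), ("b", "d"), ("e", "g"), ("f", "h"), ("i", "j")])
print(candidate & used == 0)  # False: both matchings contain the pair (i, j)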
A C++ program that does this is here, and just the backtracking part (specialized for k=5) does not need any C++ features so it's extracted out as a C program here. It runs in about 4–5 hours on my laptop, and finds all 1225566720 1-factorizations.
Another way to look at this problem is to say that two elements of M have an edge between them if they intersect (have a pair (element of S) in common), and that we're looking for all maximum independent sets in M. Again, the simplest way to solve that problem would still probably be backtracking (we'd write the same program).
Our programs can be made quite a lot more efficient by exploiting the symmetry in our problem: for example we could pick any matching as our first 1-factor in the 1-factorization (and then generate the rest by relabelling, being careful to avoid duplicates). This is how the number of 1-factorizations for K_{12} (the current record) was calculated.
A note on the wisdom of generating all solutions
In The Art of Computer Programming Volume 4A, at the end of section 7.2.1.2 Generating All Permutations, Knuth has this important piece of advice:
Think twice before you permute. We have seen several attractive algorithms for permutation generation in this section, but many algorithms are known by which permutations that are optimum for particular purposes can be found without running through all possibilities. For example, […] the best way to arrange records on a sequential storage […] takes only O(n log n) steps. […] the assignment problem, which asks how to permute the columns of a square matrix so that the sum of the diagonal elements is maximized […] can be solved in at most O(n^3) operations, so it would be foolish to use a method of order n! unless n is extremely small. Even in cases like the traveling salesrep problem, when no efficient algorithm is known, we can usually find a much better approach than to examine every possible solution. Permutation generation is best used when there is good reason to look at each permutation individually.
This is what seems to have happened here (from the comments below the question):
I wanted to calculate all solutions to run different attribute metrics on these and find an optimal match […]. As the number of results seems to grow quicker than expected, this is impractical.
Generally, if you're trying to "generate all solutions" and you don't have a very good reason for looking at each one (and one almost never does), there are many other approaches that are preferable, ranging from directly trying to solve an optimization problem, to generating random solutions and looking at them, or generating solutions from some subset (which is what you seem to have done).
Further reading
Following up references from OEIS led to a rich history and theory.
On 1-factorizations of the complete graph and the relationship to round robin schedules, Gelling (M. A. Thesis), 1973
On the number of 1-factorizations of the complete graph, Charles C. Lindner, Eric Mendelsohn, Alexander Rosa (1974?) -- this shows that the number of nonisomorphic 1-factorizations on K_{2n} goes to infinity as n goes to infinity.
E. Mendelsohn and A. Rosa. On some properties of 1-factorizations of complete graphs. Congr. Numer, 24 (1979): 739–752
E. Mendelsohn and A. Rosa. One factorizations of the complete graph: A survey. Journal of Graph Theory, 9 (1985): 43–65 (As long ago as 1985, this exact question was studied well-enough to need a survey!)
Via papers of Dinitz:
D. K. Garnick and J. H. Dinitz, On the number of one-factorizations of the complete graph on 12 points, Congressus Numerantium, 94 (1993), pp. 159-168. They announced they were computing the number of nonisomorphic 1-factorizations of K_{12}. Their algorithm was basically backtracking.
Jeffrey H. Dinitz, David K. Garnick, Brendan D. McKay: There are 526,915,620 nonisomorphic one-factorizations of K_{12} (also here), Journal of Combinatorial Designs 2 (1994), pp. 273-285: They completed the computation, and reported the numbers they found for K_{12} (526,915,620 nonisomorphic, 252,282,619,805,368,320 total).
Various One-Factorizations of Complete Graphs by Gopal, Kothapalli, Venkaiah, Subramanian (2007). A paper that is relevant to this question, and has many useful references.
W. D. Wallis, Introduction to Combinatorial Designs, Second Edition (2007). Chapter 10 is "One-Factorizations", Chapter 11 is "Applications of One-Factorizations". Both are very relevant and have many useful references.
Charles J. Colbourn and Jeffrey H. Dinitz, Handbook of Combinatorial Designs, Second Edition (2007). A goldmine. See chapters VI.3 Balanced Tournament Designs, VI.51 Scheduling a Tournament, VII.5 Factorizations of Graphs (including its sections 5.4 Enumeration and Tables, 5.5 Some 1-Factorizations of Complete Graphs), VII.6 Computational Methods in Design Theory (6.2 Exhaustive Search). This last chapter references:
[715] How K_{12} was calculated ("orderly algorithm"), a backtracking approach -- the Dinitz-Garnick-McKay paper mentioned above
[725] “Contains, among many other subjects related to factorization, a fast algorithm for finding 1-factorizations of K_{2n}.” ("Room squares and related designs", J. H. Dinitz and S. R. Stinson)
[1270] (P. Kaski and P. R. J. Östergård, One-factorizations of regular graphs of order 12, Electron. J. Comb. 12, Research Paper 2, 25 pp. (2005))
[1271] “Contains the 1-factorizations of complete graphs up to order 10 in electronic form.” (P. Kaski and P. R. J. Östergård, Classification Algorithms for Codes and Designs, Springer, Berlin, 2006.)
[1860] “A survey on perfect 1-factorizations of K_{2n}” (E. S. Seah, Perfect one-factorizations of the complete graph—A survey, Bull. Inst. Combin. Appl. 1 (1991) 59–70)
[2107] “A survey of 1-factorizations of complete graphs including most of the material of this chapter.” W. D. Wallis, One-factorizations of complete graphs, in Dinitz and Stinson (ed), Contemporary Design Theory, 1992
[2108] “A book on 1-factorizations of graphs.” W. D. Wallis, "One-Factorizations", Kluwer, Dordrecht, 1997
Some other stuff:
Factors and Factorizations of Graphs by Jin Akiyama and Mikio Kano (2007). This looks like a great book. “Frank Harary predicted that graph theory will grow so much that each chapter of his book Graph Theory will eventually expand to become a book on its own. He was right. This book is an expansion of his Chapter 9, Factorization.” There's not much about this particular topic (1-factorizations of complete graphs), but there is a proof in Chapter 4 (Theorem 4.1.1) that K_{2n} always has a 1-factorization.
Papers on special types of 1-factorizations:
[Symmetry Groups Of] Some Perfect 1-Factorizations Of Complete Graphs, B. A. Anderson, 1977 (1973). Considers 1-factorizations that are in fact "perfect", having the property that the union of any two 1-factors (matchings) is a Hamiltonian cycle. (There's one up to isomorphism for K_{2k} with k ≤ 5, and two for K_{12}.)
On 4-semiregular 1-factorizations of complete graphs and complete bipartite graphs.
Low Density MDS Codes and Factors of Complete Graphs -- also about perfect 1-factorizations
Self-invariant 1-Factorizations of Complete Graphs and Finite Bol Loops of Exponent 2
See also OEIS index entry for [sequences related to tournaments].
AMS feature column: Mathematics and Sports (April 2010) -- despite the overly broad name, is quite related.
I need an algorithm to find, what I call, "ordered combinations" (Maybe someone knows the real name for this if there is one).
Of course I already tried to come up with an algorithm on my own but I'm really stuck.
How it should work:
Given 2 lists (not sets, order is important here!) of elements that are guaranteed to contain the same elements, find all ordered combinations.
An ordered combination is a 2-tuple, 3-tuple, ... n-tuple (no limit on N) of elements that appear in the same order in both lists.
It's entirely possible that an element occurs more than once in a list.
But every element from one list is guaranteed to appear at least once in the other list.
It does not matter if the output contains a combination more than once.
I'm not really sure if that makes it clear so here are multiple examples:
(List1, List2, Expected Result, Annotation)
ASDF
ADSF
Result: AS, AD, AF, SF, DF, ASF, ADF
Note: ASD is not a valid result because there is no way to have ascending indices in the second list for this combination
ADSD
ASDD
Result: AD, AS, AD, DD, SD, ASD, ADD
Note: AD appears twice because it can be created from indices 1,2 and 1,4 in the first list and from 1,3 and 1,4 in the second. But it would also be correct if it only appeared once. Also, D appears twice in order in both lists, so this allows ADD as a valid combination too.
SDFG
SDFG
Result: SD, SF, SG, DF, DG, FG, SDF, SFG, SDG, DFG, SDFG,
Note: Same input; all combinations are possible
ABCDEFG
GFEDCBA
Result: <empty>
Note: There are no combinations that appear in the same order in both lists
QWRRRRRRR
WRQ
Result: WR
Note: The only combination that appears in the same order in both lists is WR
Notes:
While it's a language agnostic algorithm I'd prefer answers that contain either C# or pseudo-code so I can understand them.
I realized that longer combinations are always made up from shorter combinations. Example: SDF can only be a valid result if SD and DF are possible too. Maybe this helps to make the algorithm more performant by building the longer combinations from the shorter ones.
Speed is of great importance here. This algorithm will be used in real time!
If it's not clear how the algorithm works, drop a comment. I'll add an example to clarify it.
Maybe this problem is already known and solved, but I don't know the proper name for it.
I would describe this problem as enumerating common subsequences of two strings. As a first cut, make a method like this, which chooses the first letter nondeterministically and recurses (Python, sorry).
def commonsubseqs(word1, word2, prefix=''):
    if len(prefix) >= 2:
        print(prefix)
    for letter in set(word1) & set(word2):  # set intersection
        # figure out what's left after consuming the first instance of letter
        remainder1 = word1[word1.index(letter) + 1:]
        remainder2 = word2[word2.index(letter) + 1:]
        # take letter and recurse
        commonsubseqs(remainder1, remainder2, prefix + letter)
If this simple solution is not fast enough for you, then it can be improved as follows. For each pair of suffixes of the two words, we precompute the list of recursive calls. In Python again:
def commonsubseqshelper(table, prefix, i, j):
    if len(prefix) >= 2:
        print(''.join(prefix))
    for (letter, i1, j1) in table[i][j]:
        prefix.append(letter)
        commonsubseqshelper(table, prefix, i1, j1)
        del prefix[-1]  # delete the last item

def commonsubseqs(word1, word2):
    table = [[[(letter, word1.index(letter, i) + 1, word2.index(letter, j) + 1)
               for letter in set(word1[i:]) & set(word2[j:])]
              for j in range(len(word2) + 1)]  # 0..len(word2)
             for i in range(len(word1) + 1)]  # 0..len(word1)
    commonsubseqshelper(table, [], 0, 0)
This polynomial-time preprocessing step improves the speed of enumeration to its asymptotic optimum.
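Either version, run on the first example from the question, prints exactly the seven expected subsequences:

>>> commonsubseqs('ASDF', 'ADSF')  # output order depends on set iteration
AS
ASF
AD
ADF
AF
SF
DF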
I have a list of elements, each one identified with a type. I need to reorder the list to maximize the minimum distance between elements of the same type.
The set is small (10 to 30 items), so performance is not really important.
There's no limit about the quantity of items per type or quantity of types, the data can be considered random.
For example, if I have a list of:
5 items of A
3 items of B
2 items of C
2 items of D
1 item of E
1 item of F
I would like to produce something like:
A, B, C, A, D, F, B, A, E, C, A, D, B, A
A has at least 2 items between occurrences
B has at least 4 items between occurrences
C has 6 items between occurrences
D has 6 items between occurrences
Is there an algorithm to achieve this?
-Update-
After exchanging some comments, I came to a definition of a secondary goal:
main goal: maximize the minimum distance between elements of the same type, considering only the type(s) with the smallest distance.
secondary goal: maximize the minimum distance for every type. I.e.: if an arrangement increases the minimum distance of a certain type without decreasing another, then choose it.
-Update 2-
About the answers.
There were a lot of useful answers, although none is a solution for both goals, especially the second one, which is tricky.
Some thoughts about the answers:
PengOne: Sounds good, although it doesn't provide a concrete implementation, and doesn't always lead to the best result according to the second goal.
Evgeny Kluev: Provides a concrete implementation to the main goal, but it doesn't lead to the best result according to the secondary goal.
tobias_k: I liked the random approach, it doesn't always lead to the best result, but it's a good approximation and cost effective.
I tried a combination of Evgeny Kluev's approach, backtracking, and tobias_k's formula, but it needed too much time to get the result.
Finally, at least for my problem, I considered tobias_k's to be the most adequate algorithm, for its simplicity and good results in a timely fashion. It could probably be improved using simulated annealing.
First, you don't have a well-defined optimization problem yet. If you want to maximize the minimum distance between two items of the same type, that's well defined. If you want to maximize the minimum distance between two A's and between two B's and ... and between two Z's, then that's not well defined. How would you compare two solutions:
A's are at least 4 apart, B's at least 4 apart, and C's at least 2 apart
A's at least 3 apart, B's at least 3 apart, and C's at least 4 apart
You need a well-defined measure of "good" (or, more accurately, "better"). I'll assume for now that the measure is: maximize the minimum distance between any two of the same item.
Here's an algorithm that achieves a minimum distance of ceiling(N/n(A)) where N is the total number of items and n(A) is the number of items of instance A, assuming that A is the most numerous.
Order the item types A1, A2, ... , Ak where n(Ai) >= n(A{i+1}).
Initialize the list L to be empty.
For j from k to 1, distribute items of type Aj as uniformly as possible in L.
Example: Given the distribution in the question, the algorithm produces:
F
E, F
D, E, D, F
D, C, E, D, C, F
B, D, C, E, B, D, C, F, B
A, B, D, A, C, E, A, B, D, A, C, F, A, B
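Here is one possible reading of "distribute as uniformly as possible" in code (the spacing rule is my interpretation; it reproduces the guarantee, though not the exact hand-worked sequence above):

import math

def distribute(out, item, n):
    # insert n copies of item into out, as evenly spaced as possible
    final_len = len(out) + n
    step = math.ceil(final_len / n)
    if (n - 1) * step > final_len - 1:  # spacing too wide to fit; shrink it
        step = (final_len - 1) // (n - 1)
    for i in range(n):
        out.insert(i * step, item)

def arrange(counts):  # counts: (item, n) pairs, most numerous first
    out = []
    for item, n in reversed(counts):  # from A_k down to A_1, as in step 3
        distribute(out, item, n)
    return out

print(arrange([("A", 5), ("B", 3), ("C", 2), ("D", 2), ("E", 1), ("F", 1)]))
# ['A', 'B', 'C', 'A', 'D', 'B', 'A', 'E', 'C', 'A', 'B', 'D', 'A', 'F']
# consecutive A's are ceiling(14/5) = 3 positions apart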
This sounded like an interesting problem, so I just gave it a try. Here's my super-simplistic randomized approach, done in Python:
import random

def optimize(items, quality_function, stop=1000):
    no_improvement = 0
    best = 0
    while no_improvement < stop:
        i = random.randint(0, len(items) - 1)
        j = random.randint(0, len(items) - 1)
        copy = items[::]
        copy[i], copy[j] = copy[j], copy[i]
        q = quality_function(copy)
        if q > best:
            items, best = copy, q
            no_improvement = 0
        else:
            no_improvement += 1
    return items
As already discussed in the comments, the really tricky part is the quality function, passed as a parameter to the optimizer. After some trying I came up with one that almost always yields optimal results. Thanks to pmoleri for pointing out how to make this a whole lot more efficient.
def quality_maxmindist(items):
    s = 0
    for item in set(items):
        indcs = [i for i in range(len(items)) if items[i] == item]
        if len(indcs) > 1:
            s += sum(1. / (indcs[i+1] - indcs[i]) for i in range(len(indcs) - 1))
    return 1. / s if s else float('inf')  # no type repeats: perfect score
And here some random result:
>>> print(optimize(items, quality_maxmindist))
['A', 'B', 'C', 'A', 'D', 'E', 'A', 'B', 'F', 'C', 'A', 'D', 'B', 'A']
Note that, passing another quality function, the same optimizer could be used for different list-rearrangement tasks, e.g. as a (rather silly) randomized sorter.
Here is an algorithm that only maximizes the minimum distance between elements of the same type and does nothing beyond that. The following list is used as an example:
AAAAA BBBBB CCCC DDDD EEEE FFF GG
Sort element sets by number of elements of each type in descending order. Actually only the largest sets (A & B) need to be placed at the head of the list, along with those element sets that have one element fewer (C & D & E); other sets may stay unsorted.
Reserve the R last positions in the array for one element from each of the largest sets, and divide the remaining array evenly between the S-1 remaining elements of the largest sets. This gives the optimal distance K = (N - R) / (S - 1) (integer division). Represent the target array as a 2D matrix with K columns and L = N / K full rows (and possibly one partial row with N % K elements). For the example sets we have R = 2, S = 5, N = 27, K = 6, L = 4.
If matrix has S - 1 full rows, fill first R columns of this matrix with elements of the largest sets (A & B), otherwise sequentially fill all columns, starting from last one.
For our example this gives:
AB....
AB....
AB....
AB....
AB.
If we try to fill the remaining columns with other sets in the same order, there is a problem:
ABCDE.
ABCDE.
ABCDE.
ABCE..
ABD
The last 'E' is only 5 positions apart from the first 'E'.
So instead, sequentially fill all columns, starting from the last one.
For our example this gives:
ABFEDC
ABFEDC
ABFEDC
ABGEDC
ABG
Returning to linear array we have:
ABFEDCABFEDCABFEDCABGEDCABG
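A sketch of this column-filling scheme in Python (helper names are mine; it reproduces the example above):

def arrange(counts):
    sets = sorted(counts.items(), key=lambda kv: -kv[1])
    N = sum(counts.values())
    S = sets[0][1]                         # size of the largest set(s)
    R = sum(1 for _, c in sets if c == S)  # number of largest sets
    K = (N - R) // (S - 1)                 # columns = optimal distance
    rows = -(-N // K)                      # full rows plus one partial row

    def column_cells(c):  # row indices that exist in column c
        return [r for r in range(rows) if r * K + c < N]

    grid = [[None] * K for _ in range(rows)]
    for c in range(R):  # the largest sets fill the first R columns
        for r in column_cells(c):
            grid[r][c] = sets[c][0]
    rest = iter(t for t, cnt in sets[R:] for _ in range(cnt))
    for c in range(K - 1, R - 1, -1):  # pour the rest in, last column first
        for r in column_cells(c):
            grid[r][c] = next(rest)
    return [grid[r][c] for r in range(rows) for c in range(K) if r * K + c < N]

print("".join(arrange({"A": 5, "B": 5, "C": 4, "D": 4, "E": 4, "F": 3, "G": 2})))
# ABFEDCABFEDCABFEDCABGEDCABG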
Here is an attempt to use simulated annealing for this problem (C sources): http://ideone.com/OGkkc.
I believe you could see your problem as a bunch of particles that physically repel each other. You could iterate to a 'stable' situation.
Basic pseudo-code:
force(x, y) = 0 if x.type == y.type
              1 / distance(x, y) otherwise

nextposition(x, force) = coined?(x) => same
                         else      => x + force

notconverged(row, newrow) = row != newrow  // simplistically

row = [a, b, a, b, b, b, a, e]
newrow = nextposition(row)
while (notconverged(row, newrow))
    row = newrow
    newrow = nextposition(row)
I don't know if it converges, but it's an idea :)
I'm sure there may be a more efficient solution, but here is one possibility for you:
First, note that it is very easy to find an ordering which produces a minimum-distance-between-items-of-same-type (MDBIOST) of 1. Just use any random ordering, and the MDBIOST will be at least 1.
So, start off with the assumption that the MDBIOST will be 2. Do a recursive search of the space of possible orderings, based on the assumption that MDBIOST will be 2. There are a number of conditions you can use to prune branches from this search. Terminate the search if you find an ordering which works.
If you found one that works, try again, under the assumption that MDBIOST will be 3. Then 4... and so on, until the search fails.
UPDATE: It would actually be better to start with a high number, because that will constrain the possible choices more. Then gradually reduce the number, until you find an ordering which works.
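A sketch of that search in Python (the quick feasibility bound is my addition, so that the failing level is rejected cheaply):

from collections import Counter
from math import ceil

def search(remaining, out, d, n):
    if len(out) == n:
        return list(out)
    recent = out[-(d - 1):] if d > 1 else []
    for t in sorted(remaining, key=lambda t: -remaining[t]):
        if remaining[t] == 0 or t in recent:
            continue  # would put two items of type t closer than d
        remaining[t] -= 1
        out.append(t)
        found = search(remaining, out, d, n)
        if found:
            return found
        out.pop()
        remaining[t] += 1
    return None

def best_order(items):
    counts = Counter(items)
    best, d = list(items), 2  # any ordering achieves distance 1
    while max(counts.values()) <= ceil(len(items) / d):  # necessary condition
        found = search(Counter(counts), [], d, len(items))
        if found is None:
            break
        best, d = found, d + 1
    return best

print(best_order("AAAAABBBCCDDEF"))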
Here's another approach.
If every item must be kept at least k places from every other item of the same type, then write down items from left to right, keeping track of the number of items left of each type. At each point put down an item with the largest number left that you can legally put down.
This will work for N items if there are no more than ceil(N / k) items of the same type, as it will preserve this property: after putting down k items we have k fewer items, and we have put down at least one of each type that started with ceil(N / k) items of that type.
Given a clutch of mixed items you could work out the largest k you can support and then lay out the items to solve for this k.
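In code, that greedy rule might look like this (a sketch; it assumes k >= 2 and that the ceil(N / k) condition holds, so a legal choice always exists):

from collections import Counter

def lay_out(items, k):
    remaining = Counter(items)
    out = []
    for _ in items:
        # types not used in the last k-1 positions, by largest remaining count
        legal = [t for t in remaining
                 if remaining[t] > 0 and t not in out[-(k - 1):]]
        pick = max(legal, key=lambda t: remaining[t])
        remaining[pick] -= 1
        out.append(pick)
    return out

print(lay_out("AAAAABBBCCDDEF", 3))
# e.g. ['A', 'B', 'C', 'A', 'B', 'D', 'A', 'B', 'C', 'A', 'D', 'E', 'A', 'F']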
I have a symmetric matrix like shown in the image attached below.
I've made up the notation A.B which represents the value at grid point (A, B). Furthermore, writing A.B.C gives me the minimum grid point value like so: MIN((A,B), (A,C), (B,C)).
As another example A.B.D gives me MIN((A,B), (A,D), (B,D)).
My goal is to find the minimum values for ALL combinations of letters (not repeating) for one row at a time, e.g. for this example I need to find the min values with respect to row A, which are given by the calculations:
A.B = 6
A.C = 8
A.D = 4
A.B.C = MIN(6,8,6) = 6
A.B.D = MIN(6, 4, 4) = 4
A.C.D = MIN(8, 4, 2) = 2
A.B.C.D = MIN(6, 8, 4, 6, 4, 2) = 2
I realize that certain calculations can be reused which becomes increasingly important as the matrix size increases, but the problem is finding the most efficient way to implement this reuse.
Can someone point me in the right direction to an efficient algorithm/data structure I can use for this problem?
You'll want to think about the lattice of subsets of the letters, ordered by inclusion. Essentially, you have a value f(S) given for every subset S of size 2 (that is, every off-diagonal element of the matrix - the diagonal elements don't seem to occur in your problem), and the problem is to find, for each subset T of size greater than two, the minimum f(S) over all S of size 2 contained in T. (And then you're interested only in sets T that contain a certain element "A" - but we'll disregard that for the moment.)
First of all, note that if you have n letters, that this amounts to asking Omega(2^n) questions, roughly one for each subset. (Excluding the zero- and one-element subsets and those that don't include "A" saves you n + 1 sets and a factor of two, respectively, which is allowed for big Omega.) So if you want to store all these answers for even moderately large n, you'll need a lot of memory. If n is large in your applications, it might be best to store some collection of pre-computed data and do some computation whenever you need a particular data point; I haven't thought about what would work best, but for example computing data only for a binary tree contained in the lattice would not necessarily help you anything beyond precomputing nothing at all.
With these things out of the way, let's assume you actually want all the answers computed and stored in memory. You'll want to compute these "layer by layer", that is, starting with the three-element subsets (since the two-element subsets are already given by your matrix), then four-element, then five-element, etc. This way, for a given subset S, when we're computing f(S) we will already have computed all f(T) for T strictly contained in S. There are several ways that you can make use of this, but I think the easiest might be the following: let t1 and t2 be two different elements of T that you may select however you like; let S be the subset of T that you get when you remove t1 and t2. Write S1 for S plus t1 and write S2 for S plus t2. Now every pair of letters contained in T is either fully contained in S1, or it is fully contained in S2, or it is {t1, t2}. Look up f(S1) and f(S2) in your previously computed values, then look up f({t1, t2}) directly in the matrix, and store f(T) = the minimum of these 3 numbers.
If you never select "A" for t1 or t2, then indeed you can compute everything you're interested in while not computing f for any sets T that don't contain "A". (This is possible because the steps outlined above are only interesting whenever T contains at least three elements.) Good! This leaves just one question - how to store the computed values f(T). What I would do is use a 2^(n-1)-sized array; represent each subset-of-your-alphabet-that-includes-"A" by the (n-1) bit number where the ith bit is 1 whenever the (i+1)th letter is in that set (so 0010110, which has bits 1, 2, and 4 set, represents the subset {"A", "C", "D", "F"} out of the alphabet "A" .. "H" - note I'm counting bits starting at 0 from the right, and letters starting at "A" = 0). This way, you can actually iterate through the sets in numerical order and don't need to think about how to iterate through all k-element subsets of an n-element set. (You do need to include a special case for when the set under consideration has 0 or 1 element, in which case you'll want to do nothing, or 2 elements, in which case you just copy the value from the matrix.)
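A small sketch of this layer-by-layer scheme with bitmask-indexed subsets (values taken from the question's example; for brevity it computes f for all subsets rather than only those containing "A"):

letters = "ABCD"
pair_min = {("A", "B"): 6, ("A", "C"): 8, ("A", "D"): 4,
            ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 2}

n = len(letters)
f = {}
for mask in range(1 << n):  # submasks are smaller, so f is ready when needed
    bits = [i for i in range(n) if mask >> i & 1]
    if len(bits) < 2:
        continue  # nothing to do for 0- or 1-element subsets
    if len(bits) == 2:
        f[mask] = pair_min[(letters[bits[0]], letters[bits[1]])]
        continue
    t1, t2 = bits[0], bits[1]  # any two distinct elements of T work
    f[mask] = min(f[mask ^ (1 << t2)],        # T minus t2 (= S plus t1)
                  f[mask ^ (1 << t1)],        # T minus t1 (= S plus t2)
                  f[(1 << t1) | (1 << t2)])   # the pair {t1, t2}

print(f[(1 << n) - 1])  # A.B.C.D -> 2, matching the question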
Well, it looks simple to me, but perhaps I misunderstand the problem. I would do it like this:
let P be a pattern string in your notation X1.X2. ... .Xn, where Xi is a column in your matrix
first compute the array CS = [ (X1, X2), (X1, X3), ... (X1, Xn) ], which contains all combinations of X1 with every other element in the pattern; CS has n-1 elements, and you can easily build it in O(n)
now you must compute min(CS), i.e. find the minimum value of the matrix elements corresponding to the combinations in CS; again you can easily find the minimum in O(n)
done.
Note: since your matrix is symmetric, given P you just need to compute CS by combining the first element of P with all other elements: (X1, Xi) is equal to (Xi, X1)
If your matrix is very large, and you want to do some optimization, you may consider prefixes of P: let me explain with an example
when you have solved the problem for P = X1.X2.X3, store the result in an associative map, where X1.X2.X3 is the key
later on, when you solve a problem P' = X1.X2.X3.X7.X9.X10.X11 you search for the longest prefix of P' in your map: you can do this by starting with P' and removing one component (Xi) at a time from the end until you find a match in your map or you end up with an empty string
if you find a prefix of P' in your map then you already know the solution for that problem, so you just have to find the solution for the problem resulting from combining the first element of the prefix with the suffix, and then compare the two results: in our example the prefix is X1.X2.X3, and so you just have to solve the problem for
X1.X7.X9.X10.X11, and then compare the two values and choose the min (don't forget to update your map with the new pattern P')
if you don't find any prefix, then you must solve the entire problem for P' (and again don't forget to update the map with the result, so that you can reuse it in the future)
This technique is essentially a form of memoization.