Algorithm for pairing players and allocating into courts

I am making an algorithm which allocates badminton players into games (2 x 2) in the following way:
Players are divided into pairs.
All possible pair combinations must occur so that everyone partners with everyone else. If there are 10 players, each player will belong to 9 pairs.
So far, this is simple to implement.
Then the games should be allocated to 2 courts. This means two games can be going on simultaneously, but of course a player can't be part of two games at the same time.
My algorithm idea was:
1. Create an array containing all possible pairs.
2. Allocate two pairs from the array into court 1 to play against each other. If the second pair has overlapping members with pair 1, take the next pair from the array. Iterate the array in order and remove allocated pairs from it.
3. Do the same for court 2, but also make sure that the pairs do not contain overlapping members with the players in court 1.
4. Make a new round and continue from step 2.
This works quite well, but as a side effect, in the last rounds only court 1 gets used, because it is impossible to find any more pairs for court 2 that fulfil the condition in step 3. So the capacity of court 2 is wasted. I am a little unsure whether a perfect solution to this problem is even possible. If it is, how could the algorithm be improved?
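For reference, a minimal Python sketch of the greedy allocation described above (the function names and the stop-when-stuck behaviour are assumptions of this sketch; it simply reports pairs that can no longer be scheduled instead of looping forever):

    from itertools import combinations

    def schedule(players, courts=2):
        """Greedy sketch: walk the list of all pairs in order and, for each
        court, pick the first two pairs whose four players are all free
        this round."""
        pairs = list(combinations(players, 2))       # everyone partners everyone once
        rounds = []
        while True:
            busy = set()                             # players already on a court this round
            games = []
            for _ in range(courts):
                game = None
                for i, p1 in enumerate(pairs):
                    if busy & set(p1):
                        continue
                    for p2 in pairs[i + 1:]:
                        if not (busy | set(p1)) & set(p2):
                            game = (p1, p2)
                            break
                    if game:
                        break
                if game:
                    games.append(game)
                    for pair in game:
                        pairs.remove(pair)
                        busy.update(pair)
            if not games:                            # no full game can be formed any more
                break
            rounds.append(games)
        return rounds, pairs                         # leftover pairs could not be scheduled

    rounds, leftovers = schedule(range(10))
    for i, games in enumerate(rounds, 1):
        print("round", i, games)
    print("unused pairs:", leftovers)

Running it for 10 players reproduces the effect described above: the later rounds often fill only one court, and a few pairs may remain unscheduled.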

Related

Divide Up Games into 2 Distinct Sets with no Shared Players

So, I'm working on a personal project that involves machine learning and I want to set up a training dataset and a test dataset such that the training dataset doesn't contaminate the test dataset.
I'm trying to predict winning teams in an online game.
I have 10,000 or so games. Each game has 10 players in it. Some players played in multiple games. I want to divide up the set of 10,000 into two distinct groups such that no player from group 1 played in a game in group 2.
What is an efficient algorithm to do this?
I could just try to brute force it: pick 2,000 of the 10,000 games and then check, for each game, that no player in those 2,000 games appears in the 8,000 remaining games; if a player does, iterate over the 10,000 games to find a replacement game that doesn't share a player with the remaining games.
But I'm concerned that would take forever.
EDIT: The answer appears to be a breadth first search starting at a game and then getting all the games connected to it by shared players and doing that until the search ends, creating a cluster. Do that until no games are left and you have all the separate clusters.
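A minimal sketch of that clustering idea (assuming the games are given as lists of player ids; the helper names are mine):

    from collections import deque, defaultdict

    def cluster_games(games):
        """BFS over games connected by shared players; returns clusters of
        game indices such that no player appears in two clusters."""
        games_of = defaultdict(list)                 # player id -> indices of their games
        for i, players in enumerate(games):
            for p in players:
                games_of[p].append(i)

        seen = [False] * len(games)
        clusters = []
        for start in range(len(games)):
            if seen[start]:
                continue
            seen[start] = True
            queue, cluster = deque([start]), []
            while queue:
                g = queue.popleft()
                cluster.append(g)
                for p in games[g]:
                    for other in games_of[p]:
                        if not seen[other]:
                            seen[other] = True
                            queue.append(other)
            clusters.append(cluster)
        return clusters

Any split that keeps whole clusters on one side or the other is then free of shared players; if everything ends up in a single cluster, the desired 2,000/8,000 split simply doesn't exist.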
You can use the disjoint-set (union-find) data structure to perform this task.
Algorithm:
ds  // disjoint-set object or data structure
for each round x:
    for each player_a in round x:
        for each player_b in round x:
            ds.union(player_a, player_b)
Simply, perform a union on all the players who played in the same round for each round. This would create disjoint sets where each set represents a group of players who have played strictly among themselves.
After applying the above algorithm:
If you have exactly two disjoint sets, then the players in set 1 did not play with any player in set 2.
If you have only one set, then such a partition is not possible.
If more than two disjoint sets are present, you can merge some of the sets together until only two sets remain.
Considering 10,000 rounds and 10 players in each round, the above algorithm would perform roughly 10^6 * log2(10^6) operations, which is approximately 2 * 10^7 operations. This would easily be computed within a second on any modern laptop/PC.
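A runnable Python sketch of the same idea (the DisjointSet class and the rounds-as-lists-of-player-ids input are assumptions; it also unions each player with just the first player of the round, which gives the same connectivity with fewer operations than the all-pairs pseudocode above):

    class DisjointSet:
        """Union-find with path halving and union by size."""
        def __init__(self):
            self.parent = {}
            self.size = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            self.size.setdefault(x, 1)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]    # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            if self.size[ra] < self.size[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]

    def player_groups(rounds):
        """rounds: list of lists of player ids; returns the groups of players
        who only ever played among themselves."""
        ds = DisjointSet()
        for players in rounds:
            for p in players[1:]:
                ds.union(players[0], p)              # linking to one member is enough
        groups = {}
        for p in ds.parent:
            groups.setdefault(ds.find(p), set()).add(p)
        return list(groups.values())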
Helpful implementation video

Interactive game using Prolog, can you solve it? :)

I'm working on a task to develop predicates to help play this two player game outlined below.
Problem
A 2 person game is played with a set of identical stones arranged into a number of heaps. There may be any number of stones and any number of heaps. A move in this game consists of either removing any number of stones from one heap, or removing an equal number of stones from each of 2 heaps. The loser of this game is the player who picks up the last stone.
E.g. with three heaps of sizes 3, 2 and 1, there are 10 possible moves, leading to the states below:
Taking from the 1st heap only: [2,2,1], [1,2,1], [2,1]
Taking from the 2nd heap only: [3,1,1], [3,1]
Taking from the 3rd heap only: [3,2]
Taking from the 1st and 2nd heaps: [2,1,1], [1,1]
Taking from the 1st and 3rd heaps: [2,2]
Taking from the 2nd and 3rd heaps: [3,1]
[3,1] occurs twice because there are two different moves that reach it.
Task1
Create a predicate move(S1,S2) that, upon backtracking, returns all states S2 reachable from S1 in one move.
So far what we have
% Remove the first heap entirely (take all of its stones).
change([_|T],T).
% Take W stones (1 =< W < H) from the first heap, leaving at least one.
change([H|T],[H1|T]) :-
    between(1,H,W),
    H1 is H-W,
    W < H.
% Or leave the first heap alone and make the change in a later heap.
change([H|T],[H|T1]) :-
    change(T,T1).

% Attempt at the two-heap move: takes W (possibly 0) from the first heap
% and then an unrelated amount from a later heap.
change2([H|T],[H1|T1]) :-
    between(0,H,W),
    H1 is H-W,
    W < H,
    change(T,T1).
So this produces results that take any number of stones from any individual heap, but currently it will also take differing numbers of stones from different heaps. What I can't get to work is taking the same number of stones from two different heaps.
E.g. if I had [1,4,1,3] I'd like to be able to end up with [4,3] (one stone taken from each of the first and third heaps).
Any help would be really appreciated on this task :).
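Not Prolog, but a small Python sketch of the move generator may help pin down exactly what move(S1,S2) has to enumerate (the list-of-heap-sizes representation and the dropping of emptied heaps are assumptions based on the examples above):

    def moves(heaps):
        """Yield every state reachable in one move: take any number of stones
        from one heap, or the same number from two different heaps."""
        def tidy(hs):
            return [h for h in hs if h > 0]          # drop emptied heaps

        # take w stones (1..h) from a single heap i
        for i, h in enumerate(heaps):
            for w in range(1, h + 1):
                new = list(heaps)
                new[i] = h - w
                yield tidy(new)

        # take the same w stones from two different heaps i < j
        for i in range(len(heaps)):
            for j in range(i + 1, len(heaps)):
                for w in range(1, min(heaps[i], heaps[j]) + 1):
                    new = list(heaps)
                    new[i] -= w
                    new[j] -= w
                    yield tidy(new)

    print(list(moves([3, 2, 1])))                    # 10 states, [3, 1] appearing twice
    print([s for s in moves([1, 4, 1, 3]) if s == [4, 3]])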

Finding subsets being used at most k times

Every now and then I read conspiracy theories about lotto-style games being rigged: a computer supposedly browses through the combinations chosen by the players and picks a subset nobody has used. It got me thinking: how would such an algorithm have to work in order to determine such subsets really efficiently? Finding unused numbers is definitely ruled out, as is finding the least used ones, because neither necessarily provides a solution. Also, going deeper, how could an algorithm efficiently choose a subset that was used at most some k times by the players? More formally:
We are given the set of numbers 1 to 50. In the draw, 6 numbers are picked.
INPUT: m subsets, each consisting of 6 distinct numbers from 1 to 50; an integer k (k >= 0), the maximum number of players allowed to have all 6 of their numbers correct.
OUTPUT: subsets (draws) which make not more than k players win the jackpot ('winning' means all the numbers they chose were picked in the draw).
Is there any efficient algorithm which could calculate this without using a terabyte HDD to store the counts for every one of the 50!/(44!*6!) possible combinations in the pessimistic case? Honestly, I can't think of any.
If I wanted to run such a conspiracy I would first of all acquire the list of submissions by players. Then I would generate random lottery selections and see how many winners would be produced by each such selection, and just choose the random lottery selection most attractive to me. There is little point doing anything more sophisticated, because that is probably already powerful enough to be noticed by statisticians.
If you want to corrupt the lottery it would probably be easier and safer to select a few competitors you favour and have them win. In (the book) "1984" I think the state simply announced imaginary lottery winners, with each area's announcement naming somebody from outside that area. One of the ideas in "The Beckoning Lady" by Margery Allingham is of a gang who attempt to set up a racecourse so they can rig races, allowing them to disguise bribes as winnings.
First of all, the total number of combinations (choosing 6 from 50) is not very large: about 16 million (C(50,6) = 15,890,700), which can easily be handled.
For each combination, keep a count of the number of people who played it. When declaring a winner, choose a combination that has no more than k plays.
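A minimal Python sketch of that counting approach (the ticket format, an iterable of 6-number selections, is an assumption):

    from collections import Counter
    from itertools import combinations

    def safe_draws(tickets, k):
        """tickets: iterable of 6-number player selections (numbers 1-50).
        Yields every draw that would produce at most k jackpot winners;
        a 6-number ticket hits the jackpot only if it equals the 6-number draw."""
        plays = Counter(tuple(sorted(t)) for t in tickets)
        for draw in combinations(range(1, 51), 6):   # ~15.9 million candidates
            if plays[draw] <= k:
                yield draw

Enumerating all ~15.9 million candidate draws in pure Python takes longer than a second, but the memory needed is only proportional to the number of distinct tickets actually played.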
If the numbers within each subset are sorted, then you can treat your subsets as strings: sort them in lexicographic order, and it is easy to count how many players selected each subset (and which subsets were not selected at all). So the time is proportional to the number of players and not to the size of the combination space.

Algorithm to select random pairs, schedule matchups

I'm working in Ruby, but I think this question is best asked agnostic of language. It may be assumed that we have access to basic list/array functions, as well as a "random" number generator. Here's what I'd like to be able to do:
Given a collection of n teams, with n even:
1. Randomly pair each team with an opponent, such that every team is part of exactly one pair. Call this ROUND 1.
2. Randomly generate n-2 subsequent rounds (ROUND 2 through ROUND n-1) such that:
- each round has the same property as the first (every team is a member of one pair), and
- after all the rounds, every team has faced every other team exactly once.
I imagine that algorithms for doing exactly this must be well known, but as a self-taught coder I'm having trouble figuring out how to find them.
I believe you are describing a round-robin tournament. The Wikipedia page gives an algorithm.
If you need a way to randomize the schedule, randomize the team order, round order, etc.
Well, not sure if this is the most efficient algorithm, but:
1. Randomly assign the n teams into two lists of the same length n/2 (List1, List2).
2. Starting with i = 0:
3. Create pairs: List1[i], List2[i] = a team pair.
4. Repeat for i = 1 -> (n/2 - 1).
5. For rounds 2 -> (n/2 - 1):
6. Rotate List2, so that the first team in List2 is now at the end.
7. Repeat steps 2 through 5, until List2 has been cycled once.
This link was very helpful to me the last time I wrote a round robin scheduling algorithm. It includes a C implementation of a first fit algorithm for round robin pairings.
http://www.devenezia.com/downloads/round-robin/
In addition to the algorithm, he has some helpful links to other aspects of tournament scheduling (balancing home and away games, as well as rotating teams across fields/courts).
Note that you don't necessarily want a "random" order to the pairings in all cases. If, for example, you were scheduling a round robin soccer league for 8 games that only had 6 teams, then each team is going to have to play two other teams twice. If you want to make a more enjoyable season for everyone, you have to start worrying about seeding so that you don't have your top 2 teams clobbering the two weakest teams in their last two games. You'd be better off arranging for the extra games to be paired against teams of similar strength/seeding.
Based on info I found through Maniek's link, I went with the following:
1. A simple round-robin algorithm that
a. starts with pairings achieved by zipping [0, ..., (n-1)/2] and [(n-1)/2 + 1, ..., n-1] (so, if n == 10, we have 0 paired with 5, 1 with 6, etc.), and
b. rotates all but one team n-2 times clockwise until all teams have played each other (so in round 2 we pair 0 with 6, 5 with 7, etc.).
2. Randomly assigns one of [0, ..., n-1] to each of the teams.
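A short Python sketch of that circle method (the helper names are mine, and the initial arrangement differs slightly from the zipping described above, but it produces the same kind of schedule):

    import random

    def round_robin(teams):
        """Circle method: fix one team, rotate the rest; returns a list of
        rounds, each a list of (team, team) pairings."""
        teams = list(teams)
        if len(teams) % 2:
            teams.append(None)                       # bye slot if the count is odd
        n = len(teams)
        half = n // 2
        rotation = teams[:]                          # rotation[0] stays fixed
        rounds = []
        for _ in range(n - 1):
            top, bottom = rotation[:half], rotation[half:][::-1]
            rounds.append(list(zip(top, bottom)))
            # rotate everything except the fixed first element one step
            rotation = [rotation[0], rotation[-1]] + rotation[1:-1]
        return rounds

    teams = list(range(10))
    random.shuffle(teams)                            # step 2: random labels for the teams
    for i, rnd in enumerate(round_robin(teams), 1):
        print("ROUND", i, rnd)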

Probability computation and algorithm for subsequences

Here is a game where cards numbered 1-50 are distributed to two players, each getting 10 cards in random order. The aim is to sort your cards, and whoever does it first is the winner. On each turn a player picks up a card from the deck and must replace one of his existing cards with it. A player can't exchange cards within his hand, i.e. he can only replace a card with the card drawn from the deck. A discarded card goes back into the deck in random order. Now I need to write a program which plays this efficiently.
I have thought of the following solution:
1) Find all the subsequences of the cards which are in ascending order.
2) For each subsequence, compute a weight based on the number of ways the game can still be completed from it.
For example: if I have the subsequence 48,49,50 at indices 2,3,4, the probability of completing the game with this subsequence is 0 (there are no cards greater than 50 to fill the later positions), so the weight is multiplied by 0.
Similarly, if I have the sequence 18,20,30 at indices 3,4,5, then to complete the game there are 20 possible cards (31-50) to choose from for positions 6-10 and 17 possible cards (1-17) for the first two positions.
3) For each card drawn from the deck, I'll scan through the list and recalculate the weights of the subsequences to find a better fit.
Well, this solution may have a lot of flaws, but I wanted to know:
1) Given a subsequence, how do I find the probability / the number of ways to complete the game?
2) What is the best algorithm to find all such subsequences?
So if I understand correctly, the goal is to obtain an ordered hand by exchanging as few cards as possible, right? Have you tried the following approach? It is very simplistic, yet I would guess it has a pretty good performance.
N = 50
I = 10
while hand is not ordered:
    get a card from the deck
    v = value of the card
    put card in position round(v/N*I)
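A runnable Python version of that sketch, under a few assumptions not stated above (a single player drawing from the remaining 40-card deck, the drawn card always replacing whatever sits in its target slot, and the discarded card going back into the deck at a random position):

    import random

    def is_sorted(hand):
        return all(a < b for a, b in zip(hand, hand[1:]))

    def play(seed=None, n=50, hand_size=10):
        """Greedy placement heuristic: put a drawn card of value v into
        slot round(v / n * hand_size); returns how many draws were needed."""
        rng = random.Random(seed)
        deck = list(range(1, n + 1))
        rng.shuffle(deck)
        hand, deck = deck[:hand_size], deck[hand_size:]
        draws = 0
        while not is_sorted(hand):
            card = deck.pop(0)
            draws += 1
            slot = min(hand_size - 1, max(0, round(card / n * hand_size) - 1))
            discarded = hand[slot]
            hand[slot] = card
            deck.insert(rng.randrange(len(deck) + 1), discarded)   # back into the deck
        return draws

    print(sum(play(seed=s) for s in range(100)) / 100, "draws on average")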

Resources