VBScript Poker Game -- What hand do I have?

Odd little project I am working on. Before you answer, yes, I know that vbscript is probably the worst language to use for this.
I need help determining what each player has. Each card has a unique number (which I 'translate' into its poker value with a ♥♦♣♠ next to it). For example:
A♥ = 0
2♥ = 1
3♥ = 2
...
and so on. I need help determining what hand I have. I have thought of a few ways. The first is using the delta between each card value. For example, a straight would be:
n
n +/- (1+ (13 * (0 or 1 or 2 or 3)))
n +/- (2 + (13 * (0 or 1 or 2 or 3 )))
...
and so on. For example cards 3, 3+1+0, 3+2+13, 3+3+(13*3), 3+4+(13*2)
would give me:
4♥ 5♥ 6♦ 7♠ 8♣
My question is: should I attempt to use regex for this? What is the best way to tell the computer what hand it has without hardcoding every hand?
EDIT: FULL CODE HERE: https://codereview.stackexchange.com/questions/21338/how-to-tell-the-npc-what-hand-it-has

Poker hands all depend on the relative ranks and/or suits of cards.
I suggest writing some utility functions, starting with determining a rank and suit.
So a card in your representation is an int from 0..51. Here are some useful functions (pseudo-code):
// returns rank 0..12, where 0 = Ace, 12 = King
getRank(card) {
return card % 13;
}
// returns suit 0..3, where 0 = Heart, 1 = Diamond, 2 = Club, 3 = Spade
getSuit(card) {
return card / 13; // or floor(card / 13) if the language doesn't truncate integer division
}
Now that you can obtain the rank and suit of each card in a hand, you can write some utilities to work with those.
// sort and return the list of cards ordered by rank
orderByRank(cards) {
// ranked = []
// for each card in cards:
// get the rank
// insert into ranked list in correct place
}
// given a ranked set of cards return highest number of identical ranks
getMaxSameRank(ranked) {
duplicates = {} // map / hashtable
for each rank in ranked {
duplicates[rank] += 1
}
return max(duplicates.vals())
}
// count the number of cards of same suit
getSameSuitCount(cards) {
suitCounts = {} // a map or hashtable if possible
// for each card in cards:
// suitCounts{getSuit(card)} += 1
// return max suit count (highest value of suitCounts)
}
You will need some more utility functions, but with these you can now look for a flush or straight:
isFlush(cards) {
if (getSameSuitCount(cards) == 5) {
return true
}
return false
}
isStraight(cards) {
ranked = orderByRank(cards)
// rank span of exactly 4 with no duplicate ranks = 5 consecutive ranks (the ace-high straight needs a special case)
return getRank(ranked[4]) - getRank(ranked[0]) == 4 && getMaxSameRank(ranked) == 1
}
isStraightFlush(cards) {
return isFlush(cards) && isStraight(cards)
}
And so on.
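If it helps to see those utilities in one runnable place, here is a minimal sketch in Python (same 0..51 encoding as above; the names mirror the pseudo-code, and translating it into VBScript functions is mostly mechanical):

# Card encoding: 0..51; rank = card % 13 (0 = Ace .. 12 = King), suit = card // 13.
def get_rank(card):
    return card % 13

def get_suit(card):
    return card // 13

def get_max_same_rank(cards):
    counts = {}
    for card in cards:
        counts[get_rank(card)] = counts.get(get_rank(card), 0) + 1
    return max(counts.values())

def is_flush(cards):
    # all five cards share one suit
    return len(set(get_suit(c) for c in cards)) == 1

def is_straight(cards):
    # five distinct ranks spanning exactly 4 (ignores the ace-high special case)
    ranks = sorted(get_rank(c) for c in cards)
    return len(set(ranks)) == 5 and ranks[4] - ranks[0] == 4

def is_straight_flush(cards):
    return is_flush(cards) and is_straight(cards)

# The 4♥ 5♥ 6♦ 7♠ 8♣ example from the question:
print(is_straight([3, 4, 18, 45, 33]))  # True
print(is_flush([3, 4, 18, 45, 33]))     # False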
In general, you will need to check each hand against the possible poker hands, starting with the best and working down to high card. In practice you will need more than that to differentiate ties (if two players have a full house, the winner is the player with the higher-ranked three of a kind making up their full house). So you need to store a bit more information for ranking two hands against one another, such as kickers.
// simplistic version
getHandRanking(cards) {
if (isStraightFlush(cards)) return STRAIGHT_FLUSH
if (isQuads(cards)) return QUADS
...
if (isHighCard(cards)) return HIGH_CARD
}
getWinner(handA, handB) {
return max(getHandRanking(handA), getHandRanking(handB))
}
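For the tie-breaking, one common trick (my own illustration, not something prescribed above) is to have the ranking function return a tuple of (category, ranks ordered by importance); comparing the tuples then settles the category first and the kickers automatically. A rough Python sketch, with the caveat that Ace = 0 in this encoding compares as a low card:

# Hand categories, higher is better.
(HIGH_CARD, PAIR, TWO_PAIR, TRIPS, STRAIGHT,
 FLUSH, FULL_HOUSE, QUADS, STRAIGHT_FLUSH) = range(9)

def hand_key(cards):
    # Note: with Ace = 0 in this encoding, aces compare as low; remapping 0 -> 13 would make them high.
    ranks = [card % 13 for card in cards]
    counts = {r: ranks.count(r) for r in set(ranks)}
    # Ranks that appear more often matter more (e.g. the trips in a full house),
    # then higher ranks; the leftovers are the kickers, already in the right order.
    by_importance = sorted(counts, key=lambda r: (counts[r], r), reverse=True)
    shape = sorted(counts.values(), reverse=True)
    flush = len(set(card // 13 for card in cards)) == 1
    straight = len(counts) == 5 and max(ranks) - min(ranks) == 4
    if straight and flush:
        category = STRAIGHT_FLUSH
    elif shape[0] == 4:
        category = QUADS
    elif shape[:2] == [3, 2]:
        category = FULL_HOUSE
    elif flush:
        category = FLUSH
    elif straight:
        category = STRAIGHT
    elif shape[0] == 3:
        category = TRIPS
    elif shape[:2] == [2, 2]:
        category = TWO_PAIR
    elif shape[0] == 2:
        category = PAIR
    else:
        category = HIGH_CARD
    return (category, by_importance)

def get_winner(hand_a, hand_b):
    # Tuple comparison settles the category first, then the tie-breaking ranks/kickers.
    # Identical keys mean a split pot; here we simply return hand_b in that case.
    return hand_a if hand_key(hand_a) > hand_key(hand_b) else hand_b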
That would be my general approach. There is a wealth of information on poker hand ranking algorithms out there. You might enjoy Unit 1: Winning Poker Hands from Peter Norvig's Udacity course, Design of Computer Programs.

Why does n-of show a discontinuity when the size of the reported agentset goes from 2 to 3?

The n-of reporter is one of those reporters making random choices, so we know that if we use the same random-seed we will always get the same agentset out of n-of.
n-of takes two arguments: size and agentset (it can also take lists, but a note on this later). I would expect that it works by throwing a pseudo-random number, using this number to choose an agent from agentset, and repeating this process size times.
If this is true we would expect that, if we test n-of on the same agentset and using the same random-seed, but each time increasing size by 1, every resulting agentset will be the same as in the previous extraction plus a further agent. After all, the sequence of pseudo-random numbers used to pick the first (size - 1) agents was the same as before.
This seems to be confirmed generally. The code below highlights the same patches plus a further one every time size is increased, as shown by the pictures:
to highlight-patches [n]
clear-all
random-seed 123
resize-world -6 6 -6 6
ask n-of n patches [
set pcolor yellow
]
ask patch 0 0 [
set plabel word "n = " n
]
end
But there is an exception: the same does not happen when size goes from 2 to 3. As shown by the pictures below, n-of seems to follow the usual behaviour when starting from a size of 1, but the agentset suddenly changes when size reaches 3 (becoming the agentset of the figures above - which, as far as I can tell, does not change anymore):
What is going on behind the scenes of n-of that causes this change at this seemingly inexplicable threshold?
In particular, this seems to be the case only for n-of. In fact, using a combination of repeat and one-of doesn't show this discontinuity (or at least as far as I've seen):
to highlight-patches-with-repeat [n]
clear-all
random-seed 123
resize-world -6 6 -6 6
repeat n [
ask one-of patches [
set pcolor yellow
]
]
ask patch 0 0 [
set plabel word "n = " n
]
end
Note that this comparison is not influenced by the fact that n-of guarantees the absence of repetitions while repeat + one-of may have repetitions (in my example above the first repetition happens when size reaches 13). The relevant aspect simply is that the reported agentset of size x is consistent with the reported agentset of size x + 1.
On using n-of on lists instead of agentsets
Doing the same on a list always results in different numbers being extracted, i.e. the additional extraction does not equal the previous extraction plus a further number. This looks counter-intuitive to me, since I would expect the same items to be extracted from a list if the extraction is based on the same sequence of pseudo-random numbers; but at least it happens consistently, so it does not look as ambiguous to me as the agentset case.
So let's find out how this works together. Let's start by checking the primitive implementation itself. It lives here. Here is the relevant bit with error handling and comments chopped out for brevity:
if (obj instanceof LogoList) {
LogoList list = (LogoList) obj;
if (n == list.size()) {
return list;
}
return list.randomSubset(n, context.job.random);
} else if (obj instanceof AgentSet) {
AgentSet agents = (AgentSet) obj;
int count = agents.count();
return agents.randomSubset(n, count, context.job.random);
}
So we need to investigate the implementations of randomSubset() for lists and agentsets. I'll start with agentsets.
The implementation lives here. And the relevant bits:
val array: Array[Agent] =
resultSize match {
case 0 =>
Array()
case 1 =>
Array(randomOne(precomputedCount, rng.nextInt(precomputedCount)))
case 2 =>
val (smallRan, bigRan) = {
val r1 = rng.nextInt(precomputedCount)
val r2 = rng.nextInt(precomputedCount - 1)
if (r2 >= r1) (r1, r2 + 1) else (r2, r1)
}
randomTwo(precomputedCount, smallRan, bigRan)
case _ =>
randomSubsetGeneral(resultSize, precomputedCount, rng)
}
So there you go. We can see that there is a special case when the resultSize is 2. It generates 2 random numbers and reorders them (bumping the second one if necessary) so that they won't "overflow" the possible choices or collide. The comment on the randomTwo() implementation clarifies that this is done as an optimization. There is similarly a special case for 1, but that's just one-of.
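To see why that two-draw trick works, here is a simplified Python model of just the index-picking logic (not NetLogo's actual code): draw r1 from n values and r2 from n - 1 values, then bump r2 past r1 so the two indices never collide. Every unordered pair comes out equally often, using exactly two random numbers.

import random
from collections import Counter

def random_two_indices(n, rng):
    # Pick two distinct indices in 0..n-1 with exactly two draws, as in randomTwo().
    r1 = rng.randrange(n)
    r2 = rng.randrange(n - 1)
    # If r2 lands on or after r1, shift it up by one so it can never equal r1.
    return (r1, r2 + 1) if r2 >= r1 else (r2, r1)

rng = random.Random(123)
counts = Counter(random_two_indices(5, rng) for _ in range(100000))
print(sorted(counts.items()))  # all 10 unordered pairs show up roughly equally often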
Okay, so now let's check lists. It looks like its implementation of randomSubset() lives over here. Here is the snippet:
def randomSubset(n: Int, rng: Random): LogoList = {
val builder = new VectorBuilder[AnyRef]
var i = 0
var j = 0
while (j < n && i < size) {
if (rng.nextInt(size - i) < n - j) {
builder += this(i)
j += 1
}
i += 1
}
LogoList.fromVector(builder.result)
}
The code is a little obtuse, but for each element in the list it's randomly adding it to the resulting subset or not. If early items aren't added, the odds for later items go up (to 100% if need be). So changing the overall size of the list changes the numbers that will be generated in the sequence: rng.nextInt(size - i). That would explain why you don't see the same items selected in order when using the same seed but a larger list.
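To make that size dependence concrete, here is a rough Python transliteration of the loop above (a sketch of the same selection-sampling idea, not the exact NetLogo source). Every decision is a call like rng.nextInt(size - i), so with the same seed the stream of decisions changes as soon as the list size changes:

import random

def random_subset(items, n, rng):
    # Selection sampling: keep each item with probability (still needed) / (still available).
    result = []
    i, j = 0, 0
    size = len(items)
    while j < n and i < size:
        if rng.randrange(size - i) < n - j:
            result.append(items[i])
            j += 1
        i += 1
    return result

rng = random.Random(123)
print(random_subset(list(range(10)), 3, rng))
rng = random.Random(123)
print(random_subset(list(range(11)), 3, rng))  # same seed, longer list: a different draw sequence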
Elaboration
Okay, so let's elaborate on the n = 2 optimization for agentsets. There are a few things we have to know to explain this:
What does the non-optimized code do?
The non-optimized agentset code looks a lot like the list code I already discussed - it iterates each item in the agentset and randomly decides to add it to the result or not:
val iter = iterator
var i, j = 0
while (j < resultSize) {
val next = iter.next()
if (random.nextInt(precomputedCount - i) < resultSize - j) {
result(j) = next
j += 1
}
i += 1
}
Note that this code, for each item in the agentset, will perform a couple of arithmetic operations, precomputedCount - i and resultSize - j, as well as the final < comparison, the increments for j and i, and the j < resultSize check for the while loop. It also generates a random number for each checked element (an expensive operation) and calls next() to move our agent iterator forward. If it fills the result set before processing all elements of the agentset it will terminate "early" and save some of the work, but in the worst-case scenario it is possible it'll perform all those operations for each element in the agentset, when it winds up needing the last agent to completely "fill" the results.
What does the optimized code do and why is it better?
So now let's check the optimized code n = 2 code:
if (!kind.mortal)
Array(
array(smallRandom),
array(bigRandom))
else {
val it = iterator
var i = 0
// skip to the first random place
while(i < smallRandom) {
it.next()
i += 1
}
val first = it.next()
i += 1
while (i < bigRandom) {
it.next()
i += 1
}
val second = it.next()
Array(first, second)
}
First, the check for kind.mortal at the start is basically checking if this is a patch agentset or not. Patches never die, so it's safe to assume all agents in the agentset are alive and you can just return the agents found in the backing array at the two provided random numbers as the result.
So on to the second bit. Here we have to use the iterator to get the agents from the set, because some of them might be dead (turtles or links). The iterator will skip over those for us as we call next() to get the next agent. You can see the operations here are just the while checks as it increments i up through the desired random numbers. So here the work is the increments for the indexer, i, as well as the checks for the while() loops. We also have to call next() to move the iterator forward. This works because we know smallRandom is smaller than bigRandom - we're just skipping through the agents and plucking out the ones we want.
Compared to the non-optimized version we've avoided generating many of the random numbers, we avoid having an extra variable to track the result set count, and we avoid the math and less-than check to determine membership in the result set. That's not bad (especially the RNG operations).
What would the impact be? Well, if you have a large agentset, say 1000 agents, and you are picking 2 of them, the odds of keeping any particular agent as you walk the set are small (starting at 2/1000, in fact). That means you will run all that code for a long time before getting your 2 resulting agents.
So why not optimize for n-of 3, or 4, or 5, etc? Well, let's look back at the code to run the optimized version:
case 2 =>
val (smallRan, bigRan) = {
val r1 = rng.nextInt(precomputedCount)
val r2 = rng.nextInt(precomputedCount - 1)
if (r2 >= r1) (r1, r2 + 1) else (r2, r1)
}
randomTwo(precomputedCount, smallRan, bigRan)
That little logic at the end if (r2 >= r1) (r1, r2 + 1) else (r2, r1) makes sure that smallRan < bigRan; that is strictly less than, not equal. That logic gets much more complex when you need to generate 3, 4, or 5+ random numbers. None of them can be the same, and they all have to be in order. There are ways to quickly sort lists of numbers which might work, but generating random numbers without repetition is much harder.

Checking the validity of a pyramid of dominoes

I came across this question in a coding interview and couldn't figure out a good solution.
You are given 6 dominoes. A domino has 2 halves each with a number of spots. You are building a 3-level pyramid of dominoes. The bottom level has 3 dominoes, the middle level has 2, and the top has 1.
The arrangement is such that each level is positioned over the center of the level below it. Here is a visual:
[ 3 | 4 ]
[ 2 | 3 ] [ 4 | 5 ]
[ 1 | 2 ][ 3 | 4 ][ 5 | 6 ]
The pyramid must be set up such that the number of spots on each domino half should be the same as the number on the half beneath it. This doesn't apply to neighboring dominoes on the same level.
Is it possible to build a pyramid from 6 dominoes in the arrangement described above? Dominoes can be freely arranged and rotated.
Write a function that takes an array of 12 ints (such that arr[0], arr[1] are the first domino, arr[2], arr[3] are the second domino, etc.) and returns "YES" or "NO" depending on whether it is possible to create a pyramid with the given 6 dominoes.
Thank you.
You can do better than brute-forcing. I don't have the time for a complete answer. So this is more like a hint.
Count the number of occurrences of each number. It should be at least 3 for at least two numbers and so on. If these conditions are not met, there is no solution. In the next steps, you need to consider the positioning of numbers on the tiles.
Just iterate every permutation and check each one. If you find a solution, then you can stop and return "YES". If you get through all permutations then return "NO". There are 6 positions and each domino has 2 rotations, so a total of 12*10*8*6*4*2 = 46080 permutations. Half of these are mirrors of each other so we're doubling our necessary workload, but I don't think that's going to trouble the user. I'd fix the domino orientations, then iterate through all the position permutations, then iterate the orientation permutations and repeat.
So I'd present the algorithm as:
For each permutation of domino orientations
For each permutation of domino positions
if arr[0] == arr[3] && arr[1] == arr[4] && arr[2] == arr[7] && arr[3] == arr[8] && arr[4] == arr[9] && arr[5] == arr[10] then return "YES"
return "NO"
At that point I'd ask the interviewer where they wanted to go from there. We could look at optimisations, equivalences, implementations or move on to something else.
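For concreteness, here is a short Python sketch of that brute force (my own illustration of the approach above; the six equality checks are the same ones listed in the pseudo-code):

from itertools import permutations, product

def can_build_pyramid(dominoes):
    # dominoes: list of 6 (a, b) pairs. Returns True if some arrangement forms the pyramid.
    for order in permutations(dominoes):                 # 6! position permutations
        for flips in product((False, True), repeat=6):   # 2^6 orientations
            arr = []
            for (a, b), flip in zip(order, flips):
                arr.extend((b, a) if flip else (a, b))
            # arr[0..1] top row, arr[2..5] middle row, arr[6..11] bottom row
            if (arr[0] == arr[3] and arr[1] == arr[4] and
                arr[2] == arr[7] and arr[3] == arr[8] and
                arr[4] == arr[9] and arr[5] == arr[10]):
                return True
    return False

# The example pyramid from the question:
print("YES" if can_build_pyramid([(3, 4), (2, 3), (4, 5), (1, 2), (3, 4), (5, 6)]) else "NO")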
We can formulate a recursive solution:
valid_row:
if row_index < N - 1:
copy of row must exist two rows below
if row_index > 2:
matching left and right must exist
on the row above, around a center
of size N - 3, together forming
a valid_row
if row_index == N - 1:
additional matching below must
exist for the last number on each side
One way to solve it could be backtracking while tracking the chosen dominoes along the path. Given the constraints on matching, a six-domino pyramid ought to go pretty quickly.
Before I start... There is an ambiguity in the question, which may be what the interviewer was more interested in than the answer itself. This would appear to be a question asking for a method to validate one particular arrangement of the values, except for the bit which says "Is it possible to build a pyramid from 6 dominoes in the arrangement described above? Dominoes can be freely arranged and rotated.", which implies that they might want you to also move the dominoes around to find a solution. I'm going to ignore that, and stick with the simple validation of whether it is a valid arrangement. (If it is required, I'd split the array into pairs, and then brute-force the permutations of the possible arrangements against this code to find the first one that is valid.)
I've selected C# as the language for my solution, but I have intentionally avoided any language features which might make this more readable to a C# person, or perform faster, since the question is not language-specific; I wanted this to be readable/convertible for people who prefer other languages. That's also the reason why I've used lots of named variables.
Basically check that each row is duplicated in the row below (offset by one), and stop when you reach the last row.
The algorithm drops out as soon as it finds a failure. This algorithm is extensible to larger pyramids; but does no validation of the size of the input array: it will work if the array is sensible.
using System;

public class Program
{
    public static void Main()
    {
        int[] values = new int[] { 3, 4, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6 };
        bool result = IsDominoPyramidValid(values);
        Console.WriteLine(result ? "YES" : "NO");
    }

    private const int DominoLength = 2;

    public static bool IsDominoPyramidValid(int[] values)
    {
        int arrayLength = values.Length;
        int offset = 0;
        int currentRow = 1; // Note: I'm using a 1-based value here as it helps the maths
        bool result = true;

        while (result)
        {
            int currentRowLength = currentRow * DominoLength;

            // Avoid checking final row: there is no row below it
            if (offset + currentRowLength >= arrayLength)
            {
                break;
            }

            result = CheckValuesOnRowAgainstRowBelow(values, offset, currentRowLength);
            offset += currentRowLength;
            currentRow++;
        }

        return result;
    }

    private static bool CheckValuesOnRowAgainstRowBelow(int[] values, int startOfCurrentRow, int currentRowLength)
    {
        int startOfNextRow = startOfCurrentRow + currentRowLength;
        int comparablePointOnNextRow = startOfNextRow + 1;

        for (int i = 0; i < currentRowLength; i++)
        {
            if (values[startOfCurrentRow + i] != values[comparablePointOnNextRow + i])
            {
                return false;
            }
        }

        return true;
    }
}

How to compute blot exposure in backgammon efficiently

I am trying to implement an algorithm for backgammon similar to td-gammon as described here.
As described in the paper, the initial version of td-gammon used only the raw board encoding in the feature space which created a good playing agent, but to get a world-class agent you need to add some pre-computed features associated with good play. One of the most important features turns out to be the blot exposure.
Blot exposure is defined here as:
For a given blot, the number of rolls out of 36 which would allow the opponent to hit the blot. The total blot exposure is the number of rolls out of 36 which would allow the opponent to hit any blot. Blot exposure depends on: (a) the locations of all enemy men in front of the blot; (b) the number and location of blocking points between the blot and the enemy men and (c) the number of enemy men on the bar, and the rolls which allow them to re-enter the board, since men on the bar must re-enter before blots can be hit.
I have tried various approaches to compute this feature efficiently but my computation is still too slow and I am not sure how to speed it up.
Keep in mind that the td-gammon approach evaluates every possible board position for a given dice roll, so each turn, for every player's dice roll, you would need to calculate this feature for every possible board position.
Some rough numbers: assuming there are approximately 30 board positions per turn and an average game lasts 50 turns, we get that running 1,000,000 game simulations takes (x * 30 * 50 * 1,000,000) / (1000 * 60 * 60 * 24) days, where x is the number of milliseconds to compute the feature. Putting x = 0.7 we get approximately 12 days to simulate 1,000,000 games.
I don't really know if that's reasonable timing but I feel there must be a significantly faster approach.
So here's what I've tried:
Approach 1 (By dice roll)
For every one of the 21 possible dice rolls, recursively check to see if a hit occurs. Here's the main workhorse for this procedure:
private bool HitBlot(int[] dieValues, Checker.Color checkerColor, ref int depth)
{
Moves legalMovesOfDie = new Moves();
if (depth < dieValues.Length)
{
legalMovesOfDie = LegalMovesOfDie(dieValues[depth], checkerColor);
}
if (depth == dieValues.Length || legalMovesOfDie.Count == 0)
{
return false;
}
bool hitBlot = false;
foreach (Move m in legalMovesOfDie.List)
{
if (m.HitChecker == true)
{
return true;
}
board.ApplyMove(m);
depth++;
hitBlot = HitBlot(dieValues, checkerColor, ref depth);
board.UnapplyMove(m);
depth--;
if (hitBlot == true)
{
break;
}
}
return hitBlot;
}
What this function does is take as input an array of dice values (i.e. if the player rolls 1,1 the array would be [1,1,1,1]). The function then recursively checks to see if there is a hit and if so exits with true. The function LegalMovesOfDie computes the legal moves for that particular die value.
Approach 2 (By blot)
With this approach I first find all the blots and then for each blot I loop through every possible dice value and see if a hit occurs. The function is optimized so that once a dice value registers a hit I don't use it again for the next blot. It is also optimized to only consider moves that are in front of the blot. My code:
public int BlotExposure2(Checker.Color checkerColor)
{
if (DegreeOfContact() == 0 || CountBlots(checkerColor) == 0)
{
return 0;
}
List<Dice> unusedDice = Dice.GetAllDice();
List<int> blotPositions = BlotPositions(checkerColor);
int count = 0;
for(int i =0;i<blotPositions.Count;i++)
{
int blotPosition = blotPositions[i];
for (int j =unusedDice.Count-1; j>= 0;j--)
{
Dice dice = unusedDice[j];
Transitions transitions = new Transitions(this, dice);
bool hitBlot = transitions.HitBlot2(checkerColor, blotPosition);
if(hitBlot==true)
{
unusedDice.Remove(dice);
if (dice.ValuesEqual())
{
count = count + 1;
}
else
{
count = count + 2;
}
}
}
}
return count;
}
The method transitions.HitBlot2 takes a blotPosition parameter which ensures that the only moves considered are those in front of the blot.
Both of these implementations were very slow and when I used a profiler I discovered that the recursion was the cause, so I then tried refactoring these as follows:
To use for loops instead of recursion (ugly code but it's much faster)
To use parallel.foreach so that instead of checking 1 dice value at a time I check these in parallel.
Here are the average timing results of my runs for 50,000 computations of the feature (note: the timings for each approach were done on the same data):
Approach 1 using recursion: 2.28 ms per computation
Approach 2 using recursion: 1.1 ms per computation
Approach 1 using for loops: 1.02 ms per computation
Approach 2 using for loops: 0.57 ms per computation
Approach 1 using parallel.foreach: 0.75 ms per computation
Approach 2 using parallel.foreach: 0.75 ms per computation
I've found the timings to be quite volatile (maybe dependent on the random initialization of the neural network weights), but around 0.7 ms seems achievable, which, if you recall, leads to 12 days of training for 1,000,000 games.
My questions are: Does anyone know if this is reasonable? Is there a faster algorithm I am not aware of that can reduce training time?
One last piece of info: I'm running on a fairly new machine. Intel Core(TM) i7-5500U CPU @ 2.40 GHz.
Any more info required please let me know and I will provide.
Thanks,
Ofir
Yes, calculating these features makes for really hairy code. Look at the GNU Backgammon code: find eval.c and look at lines 1008 to 1267. Yes, it's 260 lines of code. That code calculates the number of rolls that hit at least one checker, and also the number of rolls that hit at least 2 checkers. As you can see, the code is hairy.
If you find a better way to calculate this, please post your results. To improve, I think you have to look at the board representation. Can you represent the board in a different way that makes this calculation faster?
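If you do experiment with the representation, one direction worth trying (my own sketch, not GNU Backgammon's code) is to precompute which of the 21 distinct rolls can cover each pip distance, so the inner loop becomes table lookups instead of move generation. The sketch below deliberately ignores blocking points and men on the bar, which the real feature must handle, so treat it only as a starting point:

from itertools import combinations_with_replacement

def reachable_distances(d1, d2):
    # Distances a single checker could travel with this roll, ignoring blocking.
    if d1 == d2:
        return {d1, 2 * d1, 3 * d1, 4 * d1}   # doubles move up to four times
    return {d1, d2, d1 + d2}

ROLLS = list(combinations_with_replacement(range(1, 7), 2))  # the 21 distinct rolls
# For each distance 1..24, the set of roll indices that could hit a blot at that distance.
HIT_TABLE = {dist: {i for i, (d1, d2) in enumerate(ROLLS)
                    if dist in reachable_distances(d1, d2)}
             for dist in range(1, 25)}

def blot_exposure(distances_to_blots):
    # Approximate exposure out of 36 rolls, ignoring blocking points and the bar.
    hitting = set()
    for dist in distances_to_blots:
        hitting |= HIT_TABLE.get(dist, set())
    # Non-double rolls count twice out of 36, doubles once.
    return sum(1 if ROLLS[i][0] == ROLLS[i][1] else 2 for i in hitting)

print(blot_exposure([4]))     # 15 of 36 rolls reach a blot 4 pips away (ignoring blockers)
print(blot_exposure([4, 6]))  # two blots

A fuller version would key the table on the blocking pattern between the blot and each attacker (or fall back to move generation when blockers are present), but the table keeps the common unblocked case cheap.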

Getting the least common denominator of two decimals

I am currently working on a text-based web game, wherein I simulate the battle sequences automatically, like MyBrute and Pockie Ninja.
So this is the situation.
We have 2 players with different attack speeds
Attack speed (determines the number of seconds needed for a player to start attacking)
(Easy example) Let's assume Player 1 has 6s and Player 2 has 3s
This means Player 2 will attack twice before Player 1 does
(it's because if two players tie on an attack turn, the one with the better attack speed goes first)
(but if they have the same attack speed, the player who has not attacked lately goes first)
Now my problem is in the loop.
I'd like to determine whose turn it is with the minimum number of loops.
For our easy example we could just create an infinite loop with a counter that increments by 3 to determine whose turn it's going to be, and just check every iteration if we have a winner and exit the loop (this is my algo, you can suggest a better one).
The big problem for me is when I have decimal values for attack speed.
Realistic example (assume that I only use 1 decimal digit)
Player1 attack speed = 5.7
Player2 attack speed = 6.6
At worst we could use 0.1 as the LCD and use it as the subtrahend per loop, but I want to determine the best subtrahend (LCD) value.
Hope it makes sense.
Thank you. I appreciate you sharing your great minds.
UPDATE
// THIS IS NOT THE ACTUAL CODE, BUT THIS IS THE LOGIC
decimal Player1Turn = Player1.attackspeed;
decimal Player2Turn = Player2.attackspeed;
decimal LCD = GetLCD(Player1.attackspeed,Player2.attackspeed) ***//THIS IS WHAT I WANT TO DETERMINE***
while (Player1.HP >0 && Player2.HP >0)
{
Player1Turn -= LCD;
Player2Turn -= LCD;
if (Player1Turn<=0)
{
//DO STUFF
Player1Turn = Player1.attackspeed;
}
if (Player2Turn<=0)
{
//DO STUFF
Player2Turn = Player2.attackspeed;
}
}
WE CAN USE A FUNCTION LIKE
public decimal GetLCD(decimal num1, decimal num2)
{
//returns the lcd
}
The following code processes the battle sequence without using the lowest common denominator. It will also run about 1 million times faster than any attempt that uses the lowest common denominator for attack speeds of, e.g., 1000 and 1000.001 respectively.
decimal time = 0;
while (player1.HP > 0 && player2.HP > 0) {
decimal player1remainingtime = player1.attackspeed - (time % player1.attackspeed);
decimal player2remainingtime = player2.attackspeed - (time % player2.attackspeed);
time += Math.Min(player1remainingtime, player2remainingtime);
if(player1remainingtime < player2remainingtime) {
//it is player 1 turn; do stuff;
} else if(player1remainingtime > player2remainingtime) {
//it is player 2 turn; do stuff;
} else {
//both player turns now
if(player1.attackspeed < player2.attackspeed) {
//player 1 is faster, its player 1 turn; do stuff
//now do stuff for player 2
} else {
//player 2 is faster, its player 2 turn; do stuff
//now do stuff for player 1
}
}
}
If you are using an object-oriented language then you can do this:
Players will be objects of type Player and there will be a Timer object.
The Timer will use the Observer design pattern.
Players will register themselves to the Timer with their response time.
When their time is due then they are notified that they can take action.
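A rough Python sketch of that timer/observer idea (the class and method names are my own, and ties on the due time are broken in favour of the faster attack speed, per the question's rule):

import heapq

class Player:
    def __init__(self, name, attack_speed):
        self.name = name
        self.attack_speed = attack_speed

    def take_action(self, at):
        print(f"{self.name} attacks at t={at:.1f}")

class Timer:
    # Priority queue of (due_time, attack_speed, order, player); smallest due time first,
    # ties broken by the faster (smaller) attack speed.
    def __init__(self):
        self.queue = []
        self.order = 0

    def register(self, player):
        heapq.heappush(self.queue, (player.attack_speed, player.attack_speed, self.order, player))
        self.order += 1

    def next_turn(self):
        due_at, speed, _, player = heapq.heappop(self.queue)
        player.take_action(due_at)   # notify the observer that its time is due
        heapq.heappush(self.queue, (due_at + speed, speed, self.order, player))
        self.order += 1

timer = Timer()
timer.register(Player("Player 1", 5.7))
timer.register(Player("Player 2", 6.6))
for _ in range(6):
    timer.next_turn()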

Known algorithm for efficiently distributing items and satisfying minima?

For the following problem I'm wondering if there is a known algorithm already as I don't want to reinvent the wheel.
In this case it's about hotel rooms, but I think that is rather irrelevant:
name | max guests | min guests
1p | 1 | 1
2p | 2 | 2
3p | 3 | 2
4p | 4 | 3
I'm trying to distribute a certain amount of guests over available rooms, but the distribution has to satisfy the 'min guests' criteria of the rooms. Also, the rooms need to be used as efficiently as possible.
Let's take 7 guests for example. I wouldn't want this combination:
3 x 3p ( 1 x 3 guests, 2 x 2 guests )
.. this would satisfy the minimum criteria, but would be inefficient. Rather I'm looking for combinations such as:
1 x 3p and 1 x 4p
3 x 2p and 1 x 1p
etc...
I would think this is a familiar problem. Is there any known algorithm already to solve this problem?
To clarify:
By efficient I mean: distribute guests in such a way that rooms are filled up as much as possible (guests' preferences are of secondary concern here, and are not important for the algorithm I'm looking for).
I do want all permutations that satisfy this efficiency criterion though. So in the above example 7 x 1p would be fine as well.
So in summary:
Is there a known algorithm that is able to distribute items as efficiently as possible over slots with a min and max capacity, always satisfying the min criterion and trying to satisfy the max as much as possible?
You need to use dynamic programming: define a cost function, and try to fit people into the possible rooms so that the cost function is as small as possible.
Your cost function can be something like:
Sum of vacancies in rooms + number of rooms
It can be a bit similar to the least raggedness problem: Word wrap to X lines instead of maximum width (Least raggedness)
You fit people into rooms as you fit words onto lines.
The constraints are the vacancies in the rooms instead of the lengths of the lines (infinite cost if you don't fulfil the constraints),
and the recursion relation is pretty much the same.
Hope it helps
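For illustration, here is a hedged Python sketch of that dynamic program (the concrete cost, empty beds plus number of rooms used, and the assumption that each room type is available in unlimited quantity are my own choices):

def cheapest_assignment(guests, room_types):
    # room_types: list of (name, max_guests, min_guests).
    # dp[g] = (cost, rooms) for housing exactly g guests; cost penalises empty beds
    # and the number of rooms used. None means "not reachable".
    INF = float("inf")
    dp = [(INF, None)] * (guests + 1)
    dp[0] = (0, [])
    for g in range(1, guests + 1):
        for name, max_g, min_g in room_types:
            # Put k guests (min..max) of the remaining g into one room of this type.
            for k in range(min_g, min(max_g, g) + 1):
                prev_cost, prev_rooms = dp[g - k]
                if prev_rooms is None:
                    continue
                cost = prev_cost + (max_g - k) + 1   # empty beds + one room used
                if cost < dp[g][0]:
                    dp[g] = (cost, prev_rooms + [(name, k)])
    return dp[guests]

print(cheapest_assignment(7, [("1p", 1, 1), ("2p", 2, 2), ("3p", 3, 2), ("4p", 4, 3)]))

To also enumerate every combination that achieves the minimum cost, keep a list of optimal room lists per state instead of a single one.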
$sql = "SELECT *
FROM rooms
WHERE min_guests <= [$num_of_guests]
ORDER BY max_guests DESC
LIMIT [$num_of_guests]";
$query = $this->db->query($sql);
$remaining_guests = $num_of_guests;
$rooms = array();
$filled= false;
foreach($query->result() as $row)
{
if(!$filled)
{
$rooms[] = $row;
$remaining_guests -= $row->max_guests;
if($remaining_guests <= 0)
{
$filled = true;
break;
}
}
}
Recursive function:
public function getRoomsForNumberOfGuests($number)
{
$sql = "SELECT *
FROM rooms
WHERE min_guests <= $number
ORDER BY max_guests DESC
LIMIT 1";
$query = $this->db->query($sql);
$remaining_guests = $number;
$rooms = array();
foreach($query->result() as $row)
{
$rooms[] = $row;
$remaining_guests -= $row->max_guests;
if($remaining_guests > 0)
{
$rooms = array_merge($this->getRoomsForNumberOfGuests($remaining_guests), $rooms);
}
}
return $rooms;
}
Would something like this work for ya? Not sure what language you're in?
For efficient = minimum rooms used, perhaps this would work. To minimise the number of rooms used you want to put max guests in the large rooms.
So sort the rooms in descending order of max guests, then allocate guests to them in that order, placing max guests in each room in turn. Try to place all remaining guests in any remaining room whose min guests allows that many; if that is impossible, back-track and try again. When back-tracking, hold back the room with the smallest min guests. Held-back rooms are not allocated guests in the max-guests phase.
EDIT
As Ricky Bobby pointed out, this does not work as such, because of the difficulty of the back-tracking. I'm keeping this answer for now, more as a warning than as a suggestion :-)
This can be done as a fairly straightforward modification of the recursive algorithm to enumerate integer partitions. In Python:
import collections

RoomType = collections.namedtuple("RoomType", ("name", "max_guests", "min_guests"))

def room_combinations(count, room_types, partial_combo=()):
    if count == 0:
        yield partial_combo
    elif room_types and count > 0:
        room = room_types[0]
        for guests in range(room.min_guests, room.max_guests + 1):
            yield from room_combinations(
                count - guests, room_types, partial_combo + ((room.name, guests),)
            )
        yield from room_combinations(count, room_types[1:], partial_combo)

for combo in room_combinations(
    7,
    [
        RoomType("1p", 1, 1),
        RoomType("2p", 2, 2),
        RoomType("3p", 3, 2),
        RoomType("4p", 4, 3),
    ],
):
    print(combo)
We should select a list of rooms (possibly using each room type more than once) such that the sum of the min values of the selected rooms is less than or equal to the number of guests, and the sum of the max values is greater than or equal to the number of guests.
I defined cost as the total free space in the selected rooms (cost = max - min).
We can check for the answer with this code and find all possible combinations with the minimum cost. (C# code)
class FindRooms
{
    // input
    int numberOfGuest = 7; // your goal
    List<Room> rooms;

    // output
    int cost = int.MaxValue;                    // total free space in rooms (start "infinite" so the first fit is kept)
    List<String> options = new List<String>();  // list of possible combinations for the best cost

    private void solve()
    {
        // fill rooms data
        // fill numberOfGuest
        // run this function to find the answer
        addMoreRoom("", 0, 0);
        // cost and options are ready to use
    }

    // this function adds rooms to the list recursively
    private void addMoreRoom(String selectedRooms, int minPerson, int maxPerson)
    {
        // check if it is acceptable
        if (minPerson <= numberOfGuest && maxPerson >= numberOfGuest)
        {
            // check if it is better than or equal to the previous result
            if (maxPerson - minPerson == cost)
            {
                options.Add(selectedRooms);
            }
            else if (maxPerson - minPerson < cost)
            {
                cost = maxPerson - minPerson;
                options.Clear();
                options.Add(selectedRooms);
            }
        }

        // check if too many rooms are selected
        if (minPerson > numberOfGuest)
            return;

        // add more rooms recursively
        foreach (Room room in rooms)
        {
            // add the room's min and max space to the current state and check
            addMoreRoom(selectedRooms + "," + room.name, minPerson + room.min, maxPerson + room.max);
        }
    }

    public class Room
    {
        public String name;
        public int min;
        public int max;
    }
}
