My question is: how can I calculate the chances of winning when there are too many permutations for a complete walk-through?
I'm developing a Texas Hold'em poker app. Its basic function is a probability calculator. A deck consists of 52 cards, each player holds two cards, 5 cards are placed on the table, and then the winner is evaluated. So with both players' hole cards known and 4 cards on the table, there are 44 possible outcomes for the last card. With 3 cards on the table, there are 45*44 permutations. But with no cards on the table, I have to consider 48*47*46*45*44 = 205,476,480 permutations. If I manage to implement the "order doesn't matter" part perfectly, that drops to 1,712,304 combinations, which is still about 15 times too many for my CakePHP webserver.
The key is to consider only the relevant cards.
Instead of creating and evaluating every possible combination of hands, only consider those combinations that actually improve your hand.
An example (not complete):
The hand is the 5 of hearts + the 8 of spades. The following card combinations improve this hand:
- any card above 8
- one, two, or three more 5s
- one, two, or three more 8s
- combinations of the above
- any value appearing two, three, or four times
- 5 clubs or 5 diamonds (a flush on the board)
- 4 hearts or 4 spades (a flush using one of your hole cards)
- 6, 7, 9 (a straight)
- 4, 6, 7
- similar combinations of 4 or 5 cards
- 6, 7, 8, 9 of hearts (a straight flush with the 5 of hearts)
- similar combinations with hearts and spades
For each of those, you can calculate the probability quickly.
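For example, hitting another 5 is a plain hypergeometric draw. A minimal Python sketch, assuming both players' hole cards are known (48 unseen cards) and the full board of 5 is still to come:

    from math import comb

    def p_at_least_one(outs, unseen=48, to_come=5):
        # Probability that at least one of `outs` helpful cards appears
        # among the `to_come` board cards drawn from `unseen` unknowns.
        return 1 - comb(unseen - outs, to_come) / comb(unseen, to_come)

    print(p_at_least_one(3))  # three 5s left -> chance of at least one more 5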
Early in a round, you can speed up the computation by ignoring combinations that require more than n matching cards, because they have almost no effect on the result (everybody has a chance of a royal flush at the start, but hardly anyone ever gets one).
You can even precalculate the values for the starting hands. That is only about 13 * 13 values: 13 options for each card, times 2 for suited or not, divided by 2 because the order of the two cards is irrelevant.
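A quick sketch of how small that precalculated table is (13 pocket pairs + 78 suited + 78 offsuit = 169 classes, using the usual "AKs"/"AKo" shorthand):

    RANKS = "23456789TJQKA"

    def starting_hand_classes():
        # Enumerate the distinct starting-hand classes: a pocket pair,
        # or an unordered rank combination marked suited or offsuit.
        classes = []
        for i, hi in enumerate(RANKS):
            for lo in RANKS[:i]:
                classes.append(hi + lo + "s")  # suited, e.g. "AKs"
                classes.append(hi + lo + "o")  # offsuit, e.g. "AKo"
            classes.append(hi + hi)            # pocket pair, e.g. "AA"
        return classes

    print(len(starting_hand_classes()))  # 169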
Related
I am making an algorithm which allocates badminton players into games (2 x 2) in the following way:
Players are divided into pairs.
All possible pair combinations must occur, so that everyone plays with everyone. With 10 players, each player belongs to 9 pairs.
So far, this is simple to implement.
Then, games should be allocated to 2 courts. This means 2 games can go on simultaneously, but of course a player can't be part of two games at the same time.
My algorithm idea was:
1. Create an array containing all possible pairs.
2. Allocate two pairs from the array to court 1 to play against each other. If the second pair has overlapping members with pair 1, take the next pair from the array. Iterate the array in order and remove allocated pairs from it.
3. Do the same for court 2, but also make sure the pairs do not overlap with the players on court 1.
4. Start a new round and continue from step 2.
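A minimal Python sketch of these steps (function and variable names are mine):

    from itertools import combinations

    def schedule(players, courts=2):
        # Greedy allocation: iterate the pair pool in order, fill each
        # court with two non-overlapping pairs, remove the used pairs.
        pool = list(combinations(players, 2))   # step 1: all possible pairs
        rounds = []
        while pool:
            busy, games = set(), []
            for _ in range(courts):             # steps 2 and 3
                game = []
                for pair in pool:
                    if busy.isdisjoint(pair):
                        game.append(pair)
                        busy.update(pair)
                    if len(game) == 2:
                        break
                if len(game) == 2:
                    games.append(game)
                    for p in game:
                        pool.remove(p)
                else:                           # court cannot be filled
                    for p in game:
                        busy.difference_update(p)
            if not games:
                break                           # leftovers never fit a full game
            rounds.append(games)                # step 4: next round
        return rounds, pool                     # pool = unscheduled leftover pairs

Running it for 10 players reproduces the effect described next: the final rounds typically contain only a single game, and a few mutually overlapping pairs can remain unscheduled.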
This works quite well, but as a side effect, in the last rounds only court 1 is used, because it is impossible to find any more pairs for court 2 that fulfil the condition in step 3. So the capacity of court 2 is wasted. I am a little unsure whether a perfect solution to this problem even exists. If it does, how could the algorithm be improved?
I have 5 arenas and 15 players. There are 5 game rounds, and each player should be at 1 arena per round. Every player should pass all 5 arenas after 5 rounds, but the sequence doesn't matter.
This creates 5 groups of 3 players, which shift between the arenas every round.
Now, I also want each player to see as many different players as possible.
My solution with 5 groups of 3 players that shift between arenas ticks all the boxes except the last one.
From my understanding, it is impossible to have unique groups every round whilst also meeting the other requirements, but I'd like to have the best possible solution for this problem.
This is a solution I came up with myself, but as you can see, these are just 5 static groups that shift around between the different arenas (players 1, 6 and 11 stay together at all times).
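In code, that static schedule amounts to nothing more than this (Python sketch of my own solution):

    def static_rotation(players=15, arenas=5, rounds=5):
        # Fixed groups of 3; group g visits arena (g + r) mod arenas in round r.
        size = players // arenas
        groups = [[g + 1 + i * arenas for i in range(size)] for g in range(arenas)]
        # groups == [[1, 6, 11], [2, 7, 12], [3, 8, 13], [4, 9, 14], [5, 10, 15]]
        return [{(g + r) % arenas + 1: groups[g] for g in range(arenas)}
                for r in range(rounds)]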
Is there an existing algorithm to solve this problem? Or can someone give me some kind of flowchart?
I am implementing a chess engine, and my move ordering scheme works as follows:
1. Use pvmove (the principal variation move)
2. Use MVV-LVA (most valuable victim / least valuable attacker)
3. Use killer heuristics
Though I don't get why I should store only 2 or 3 moves per depth when I could store a whole list?
Because what you would be storing are possible future moves, a whole list means storing every possible continuation.
Assuming you have 32 chess pieces and every piece has 4 possible moves, you would need to store:
1 move: 128 possibilities
2 moves: 16,384 possibilities
3 moves: 2,097,152 possibilities
4 moves: 268,435,456 possibilities
5 moves: 34,359,738,368 possibilities
6 moves: 4,398,046,511,104 possibilities
7 moves: 562,949,953,421,312 possibilities
8 moves: 72,057,594,037,927,936 possibilities
9 moves: 9,223,372,036,854,775,808 possibilities
That's just a simple assumption about the number of pieces and their possible moves, but you can see that even 3 moves ahead you need to store around 2 million possibilities, which is a lot if you have limited time for each turn.
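The table above is just (32 * 4)^n, which you can reproduce in two lines:

    # Growth of a full move list: (32 pieces * 4 moves each) ** depth
    for depth in range(1, 10):
        print(f"{depth} moves: {128 ** depth:,} possibilities")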
Of course you could make optimizations, but to answer your question:
because you have neither the CPU speed nor the storage space to save a whole list of possible moves.
I have a number of parallel computing problems I am not sure how to approach.
To conceptualize the problem, I'll give a real-life example. I have a deck of cards (1-9) and I shuffle them. I draw 3 cards and place the lowest on the table, then I draw another and place the lowest, and so on.
I know how to do this serially, but I was wondering if there is a good way to do it in parallel. An example would be:
Problem:
8 2 5 3 9 7 6 1 4
Solution:
2 3 5 7 6 1 4 8 9
I've considered that each number can move forward at most two positions and backward any number of positions, but I still can't figure out a parallel way to do this. Should I just run it all serially on the first thread, or do it on the CPU?
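For reference, a Python sketch of the serial version I have in mind:

    import heapq

    def window_sort(cards, hand_size=3):
        # Keep `hand_size` cards in hand; repeatedly place the lowest
        # on the table and draw the next card from the deck.
        hand = list(cards[:hand_size])
        heapq.heapify(hand)
        out = [heapq.heappop(hand)]                    # place lowest of first draw
        for card in cards[hand_size:]:
            out.append(heapq.heappushpop(hand, card))  # draw one, place the lowest
        out.extend(sorted(hand))                       # deck empty: place the rest
        return out

    print(window_sort([8, 2, 5, 3, 9, 7, 6, 1, 4]))
    # [2, 3, 5, 7, 6, 1, 4, 8, 9]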
Thanks!
After considering this for a while, I decided the fastest solution would be to iterate through the elements and swap whenever the next element is smaller, but to run a second, independent set of swaps in parallel after the first set, so that a double swap takes just over n+1 or n+2 operations. Bigger hands could be supported at the cost of only a couple of extra operations.
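What this describes is essentially odd-even transposition sort: alternating passes of compare-and-swaps over disjoint index pairs, so every swap within a pass can run in parallel. A sketch (written serially here; each inner loop is the parallelizable set):

    def odd_even_transposition_sort(a):
        a = list(a)
        n = len(a)
        for step in range(n):
            # the pairs (i, i+1) below are disjoint, so all swaps in this
            # pass could execute simultaneously on parallel hardware
            for i in range(step % 2, n - 1, 2):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a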
Here is a game where cards 1-50 are distributed between two players, each getting 10 cards in random order. The aim is to sort your cards, and whoever does it first is the winner. On each turn a player can pick up a card from the deck and has to replace one of his existing cards with it. A player can't rearrange his own cards, i.e. his only move is to replace one of his cards with the card from the deck. A discarded card goes back into the deck at a random position. Now I need to write a program which plays this efficiently.
I have thought of the following solution:
1) Find all the subsequences which are in ascending order in a given set of cards.
2) For each subsequence, compute a weight based on the number of ways the game can still be completed from it.
For example: if I have the subsequence 48, 49, 50 at positions 2, 3, 4, the probability of completing the game with this subsequence is 0, because positions 5-10 would need cards greater than 50; so its weight is multiplied by 0.
Similarly, if I have the sequence 18, 20, 30 at positions 3, 4, 5, then the number of ways to complete the game comes from 20 possible cards to choose from for positions 6-10 and 17 possible cards for the first 2 positions.
3) For each card drawn from the deck, I'll scan through the list and recalculate the weights of the subsequences to find a better fit.
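A sketch of that weighting for a contiguous ascending run (my own formulation of step 2; it ignores the gaps inside the run itself):

    from math import comb

    def run_weight(run, start, slots=10, max_card=50):
        # `run` sits at 1-based positions start..start+len(run)-1.
        before = start - 1                  # positions to fill before the run
        after = slots - before - len(run)   # positions to fill after the run
        below = run[0] - 1                  # cards smaller than the run's first value
        above = max_card - run[-1]          # cards larger than the run's last value
        if below < before or above < after:
            return 0                        # e.g. 48,49,50 at positions 2-4
        return comb(below, before) * comb(above, after)

    print(run_weight([48, 49, 50], 2))   # 0: nothing above 50 for positions 5-10
    print(run_weight([18, 20, 30], 3))   # 17 cards below for 2 slots, 20 above for 5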
Well, this solution may have a lot of flaws, but I wanted to know:
1) Given a subsequence, how do I find the number of possible ways to complete the game (and hence its probability)?
2) What is the best algorithm to find all such ascending subsequences?
So if I understand correctly, the goal is to obtain an ordered hand by exchanging as few cards as possible, right? Have you tried the following approach? It is very simplistic, yet I would guess it performs quite well.
N = 50   # highest card value
I = 10   # hand size
while hand is not ordered:
    draw a card from the deck
    v = value of the card
    put the card in position max(1, round(v / N * I))   # maps values 1..N to positions 1..I
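And a runnable, self-contained Python version of the same idea (the deck is modeled as a shuffled list, and a discarded card is reinserted at a random position):

    import random

    def play(n_cards=50, hand_size=10):
        # Returns the number of exchanges needed to sort one player's hand.
        deck = list(range(1, n_cards + 1))
        random.shuffle(deck)
        hand = [deck.pop() for _ in range(hand_size)]
        turns = 0
        while hand != sorted(hand):
            v = deck.pop()                                     # draw from the deck
            slot = max(0, round(v / n_cards * hand_size) - 1)  # value -> hand index
            deck.insert(random.randrange(len(deck) + 1), hand[slot])  # discard back
            hand[slot] = v
            turns += 1
        return turns

    print(play())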