Most online games form teams arbitrarily. Oftentimes it's up to the user, who will simply pick a fast server with a free slot. This behavior produces unfair teams and people rage-quit. By tracking a player's statistics (or any statistics that can be gathered), how can you choose teams that are as fair as possible?
One of the more well-known systems now is Microsoft's TrueSkill algorithm.
People have also attempted to adapt the Elo system for team matchmaking, though it's more designed for 1-v-1 pairings.
After my previous answer, I realized that if you wanted to get fancy you could use a simple but powerful idea: Markov chains.
The intuitive idea behind using a Markov Chain goes something like this:
Create a graph G=(V,E)
Let each vertex in V represent an entity
Let each edge in E represent a transition probability between entities. This means that the weights of the outgoing edges of each vertex must sum to 1.
At the start (time t=0) assign each entity a unit value of 1
At each time step, transition from entity i to entity j with the transition probability defined in step 3.
Let t → infinity; the value of each entity at t = infinity is the equilibrium (that is, the total probability of transitioning into an entity equals the total probability of transitioning out of it).
This idea has, for example, been used successfully to implement Google's PageRank algorithm. To see how you can use it here, consider the following:
V = players, E = probability of transitioning from player to player based on relative win/loss ratios.
Each player is a vertex.
An edge from player A to B (B not equal to A) has probability X/N, where N is the total number of games played by A and X is the number of those games A lost to B. Add an edge from A to A with probability M/N, where M is the total number of games won by A.
Assign a skill level of 1 to each player at the start.
Use the Power Method to find the dominant eigenvector of the link matrix constructed from the probabilities defined in 3.
The dominant eigenvector gives the amount of skill each player has at t = infinity, that is, once the Markov chain has reached equilibrium. This is a robust measure of each player's skill derived from the topology of the win/loss space.
Some caveats: there are several problems with applying this directly. The biggest is separated webs (that is, your Markov chain will not be irreducible, so the power method will not be guaranteed to converge). Luckily, Google dealt with these problems and more when implementing PageRank (for instance with a damping factor), and all that remains for you is to look up how they circumvent them if you are so inclined.
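To make this concrete, here is a minimal Python sketch of that power iteration (my own illustration; the (winner, loser) game-log format and the 0.85 damping factor are assumptions, not part of the answer above):

import numpy as np

def skill_ranking(games, players, iterations=100):
    """games: list of (winner, loser) pairs; returns a skill score per player."""
    n = len(players)
    idx = {p: i for i, p in enumerate(players)}
    P = np.zeros((n, n))                      # P[i, j] = probability of moving from i to j
    played = np.zeros(n)
    for winner, loser in games:
        w, l = idx[winner], idx[loser]
        P[l, w] += 1                          # a loss transfers credit from loser to winner
        P[w, w] += 1                          # a win is a self-loop for the winner
        played[w] += 1
        played[l] += 1
    P = P / np.maximum(played, 1)[:, None]    # normalise each row so it sums to 1
    d = 0.85                                  # PageRank-style damping keeps the chain irreducible
    P = d * P + (1 - d) / n
    v = np.ones(n) / n                        # start every player with equal skill
    for _ in range(iterations):
        v = v @ P                             # one step of the power method
    return dict(zip(players, v))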
One way would be to simply keep a list of players looking for matches at any given time, sorted by player rank. Once you've reached enough people to start a new match (or perhaps two fewer than required), group them as such:
Remove best and worst player and put them on team 1
Remove now-best and now-worst player (really second-best and second worst) and put them on team 2
If there are only two players left, place each one on different teams, depending on who has the lowest combined score. Otherwise, repeat:
Remove now-best and now-worst and put them on team 1
Remove now-best and now-worst and put them on team 2
etc. etc. etc. until your teams are filled.
If you decided to start a new match with fewer than the required number of players, then it is time to let the players wait for new people to join. As soon as a new person joins, you're going to want to put them on the open team with the lowest combined score.
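As a rough sketch of the pairing scheme just described (my own illustration; it assumes each waiting player is a (name, rating) tuple and that an even number of players is queued):

def form_teams(queue):
    pool = sorted(queue, key=lambda p: p[1], reverse=True)    # best player first
    team1, team2 = [], []
    current = team1
    while len(pool) > 2:
        current.append(pool.pop(0))       # take the current best
        current.append(pool.pop(-1))      # take the current worst
        current = team2 if current is team1 else team1        # alternate teams
    def total(team):
        return sum(rating for _, rating in team)
    # two players left: the team with the lower total gets the stronger one
    weaker, stronger = (team1, team2) if total(team1) <= total(team2) else (team2, team1)
    weaker.append(pool.pop(0))
    stronger.append(pool.pop(0))
    return team1, team2

With the six players from the example below this yields [A, F, D] (1500) versus [B, E, C] (1600).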
Alternatively, if you wanted to avoid games that combined good and bad players on the same team, you could split up everyone into tiers, (groups based on their ranking) and only match people within the same tier. This would require a new open/sorted list for each extra tier.
Example
Game is 4v4
A - 1000 pnts
B - 800 pnts
C - 600 pnts
D - 400 pnts
E - 200 pnts
F - 100 pnts
As soon as you get these six, group them into teams as such:
Team 1: A, F, D (combined score 1500)
Team 2: B, E, C (combined score 1600)
Now, we wait for two more players to join.
First, player G comes along with 500 pnts. He goes to Team 1, because they have the lower combined score.
Then, player H comes along with 800 pnts. He goes to Team 2, because they are the only open team left.
Total teams:
Team 1: A, F, D, G (combined score 2000)
Team 2: B, E, C, H (combined score 2400)
Note that the teams were actually pretty fair until the last two came in. To be honest, the best way would be to only create the match when you have enough players to start it. But then the wait times might be too long for the player.
Adjust how many players you require before forming the match: lower means less wait time but possibly less fair teams; higher means more wait time but fairer teams.
If you have a pre-game screen, lower would also offer more time for people to chat and talk with their to-be teammates while waiting.
It is difficult to estimate the skill of any one player by a single metric and such a method is prone to abuse. However, if you only care about implementing something simple that will work well try the following:
keep track of wins and losses
use the percentage of wins vs losses as the statistic to match players ( in some sense of the word match, i.e. group players with similar percentages)
This has an obvious downside: one player may have a win-loss record of 5-0 and another 50-20; the first has an infinite win-to-loss ratio while the other has a more reasonable one. It makes sense for the matching system to acknowledge this and to be far more confident that the latter player actually has more skill, because of the consistency required. However, pitting the two players against each other would probably be a good thing: the 5-0 player is probably trying to work the system by playing weaker opponents, so matching him against a consistently good player would do everyone good.
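One simple way to soften the small-sample problem is to smooth the win rate toward a prior before comparing players. A sketch of my own (the prior of 10 games at a 50% win rate is an arbitrary illustrative choice, not something from this answer):

def smoothed_win_rate(wins, losses, prior_games=10, prior_rate=0.5):
    # Laplace-style smoothing: pretend every player starts with a few average games.
    return (wins + prior_rate * prior_games) / (wins + losses + prior_games)

print(smoothed_win_rate(5, 0))    # about 0.67
print(smoothed_win_rate(50, 20))  # about 0.69, so the consistent player edges ahead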
Note: I speak from experience playing only strategy games such as Warcraft 3, where this is the typical matchmaking behaviour. It seems to me that the percentage of wins over losses is a great metric by which to match players.
Match based on multiple attributes. I've implemented a simple matchmaking system using AWS CloudSearch (based on Apache Solr). For example, matching on a combination of the following fields is possible:
{
  "fields": {
    "elo_rating": 3121.44,
    "points": 404,
    "randomizer": 35,
    "last_login": "2014-10-09T22:57:57Z",
    "weapons": [
      "CANNON",
      "GUN"
    ]
  }
}
It is then possible to run queries over multiple fields, like the following:
(and (or weapons:'GUN' weapons:'CANNON' weapons:'DRONE')(and last_login:['2013-05-25T00:00:00Z','2014-10-25T00:00:00Z'])(and points:[100, 200])(and elo_rating:[1000, 2000]))
Related
I am looking for any direction on how to implement the process below, you should not need to understand much at all about poker.
Below is a grid of possible two-card combinations.
Pocket pairs in blue, suited cards in yellow and off-suited in red.
Essentially there is a slider under the matrix which selects a percentage of possible combinations of two cards which a player could be dealt. However, you can see that it moves in a sort of linear fashion, towards the "better" cards.
These selections are also able to be parsed from strings e.g AA-88,AKo-AJo,KQo,AKs-AJs,KQs,QJs,JTs is 8.6% of the matrix.
I've looked around but cannot find questions about this specific selection process. I am not looking for "how to create this grid"; rather, how would I go about the selection process based on the sliding percentage? I am primarily a JavaScript developer, but snippets in any language are appreciated, if applicable.
My initial assumption is that there is some sort of weighting involved (i.e. favouring pairs over suited and suited over off-suited), or could it just be predetermined and I'm overthinking this?
In my opinion there should be something along the lines of "grouping(s)" AND "a subsequent weighting" process. It should also be customisable for the user to provide an optimal experience (imo).
For example, if you look at the below:
https://en.wikipedia.org/wiki/Texas_hold_%27em_starting_hands#Sklansky_hand_groups
These are/were standard hand rankings created back in the 1970s/1980s; since then, however, hand selection has become much more complicated. These kinds of groupings have changed a lot in 30 years, so poker players will want a custom user experience here.
But let's take a basic preflop scenario.
Combinations: pairs = 6, suited = 4, non-suited = 12
1 (AA:6, KK:6, QQ:6, JJ:6, AKs:4) = 28combos
2 (AQs:4, TT:6, AK:16, AJs:4, KQs:4, 99:6) = 40
3 (ATs:4, AQ:16, KJs:4, 88:6, KTs:4, QJs:4) = 38
....
9 (87s:4, QT:12, Q8s:4, 44:6, A9:16, J8s:4, 76s:4, JT:16) = 66
Say, for instance, we only reraise the top 28/1326 combinations (in theory there should be some deduction here, but for simplicity let's ignore that). We are then 3-betting or reraising a very obvious and small percentage of hands; our holdings are transparent at around 2-4% of total hands. So a player may want to disguise their reraise or 3-bet range with, say, 50% of the weakest hands from group 9, as a basic example.
Different decision trees and game theory can be used with "range building", so a simple ordered list may not be suitable for what you're trying to achieve; it depends on your program's purpose.
That said, if you're just looking to build an ordered list, you could take the X% of hands that players open with (say the average is 27%) and run a hand-equity calculator simulation, tweaking the GitHub project below to get different hand rankings. https://github.com/andrewprock/pokerstove
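Once you have such an ordered list, the slider selection itself is just a cumulative walk down the list until the requested share of the 1326 total combinations is covered. A sketch (the ranking shown is illustrative, not a real equity ordering):

TOTAL_COMBOS = 1326   # all two-card combinations

RANKED = [("AA", 6), ("KK", 6), ("QQ", 6), ("AKs", 4), ("JJ", 6),
          ("AQs", 4), ("AKo", 12), ("TT", 6), ("AJs", 4), ("KQs", 4)]

def select_range(ranked_hands, percent):
    target = TOTAL_COMBOS * percent / 100.0
    chosen, running = [], 0
    for hand, combos in ranked_hands:
        if running >= target:             # already covered the requested share
            break
        chosen.append(hand)
        running += combos
    return chosen, 100.0 * running / TOTAL_COMBOS

print(select_range(RANKED, 3))   # the hands covering roughly the top 3% of combos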
There are also some lists at the bottom of this page:
http://www.propokertools.com/help/simulator_docs
Be lucky!
I am looking to create a large list of items that allows for easy insertion of new items and for easily changing the position of items within that list. When updating the position of an item, I want to change as few fields as possible regarding the order of items.
After some research, I found that Jira's Lexorank algorithm fulfills all of these needs. Each story in Jira has a 'rank-field' containing a string which is built up of 3 parts: <bucket>|<rank>:<sub-rank>. (I don't know whether these parts have actual names; this is what I will call them for ease of reference.)
Examples of valid rank-fields:
0|vmis7l:hl4
0|i000w8:
0|003fhy:zzzzzzzzzzzw68bj
When dragging a card above 0|vmis7l:hl4, the new card will receive rank 0|vmis7l:hl2, which means that only the rank-field for this new card needs to be updated while the entire list can always be sorted on this rank-field. This is rather clever, and I can't imagine that Lexorank is the only algorithm to use this.
Is there a name for this method of sorting used in the sub-rank?
My question is related to the creation of new cards in Jira. Each new card starts with an empty sub-rank, and the rank is always chosen such that the new card is located at the bottom of the list. I've created a bunch of new stories just to see how the rank would change, and it seems that the rank is always incremented by 8 (in base-36).
Does anyone know more specifically how the rank for new cards is generated? Why is it incremented by 8?
I can only imagine that after some time (270 million cards) there are no more ranks to generate, and the system needs to recalculate the rank-field of all cards to make room for additional ranks.
Are there other triggers that require recalculation of all rank-fields?
I suppose the bucket plays a role in this recalculation. I would like to know how?
We are talking about a special kind of indexing here. This is not sorting; it is just preparing items to end up in a certain order in case someone happens to sort them (by whatever sorting algorithm). I know that variants of this kind of indexing have been used in libraries for decades, maybe centuries, to ensure that books belonging together but lacking a common title end up next to each other in the shelves, but I have never heard of a name for it.
The 8 was probably chosen wisely as a compromise, maybe even by analyzing typical use cases. Consider this: if you choose a small increment, e.g. 1, then all tickets will have ranks like [a, b, c, …]. This is great if you create a lot of tickets (up to 26) in the correct order, because your rank fields stay small (one letter). But as soon as you move a ticket between two other tickets, you have to add a letter: [a, b] plus a new ticket between them becomes [a, an, b]. If you expect this to happen a lot, you had better leave gaps between the ranks: [a, i, q, …]; then an additional ticket can get a single letter as well: [a, e, i, q, …]. But of course, if you now create lots of tickets in the correct order right at the beginning, you quickly run out of letters: [a, i, q, y, z, za, zi, zq, …]. The 8 is probably a good value that allows for enough gaps between tickets without increasing the rank length too soon. Keep in mind that other scenarios (maybe not Jira tickets, which are created manually) might make other values more reasonable.
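The underlying trick of finding a rank string that sorts between two existing ones can be sketched in a few lines. This is my own illustration of the general technique, not Jira's actual implementation (it assumes lo sorts before hi and that hi has no trailing '0'):

DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def between(lo, hi):
    # Return a base-36 string that sorts strictly between lo and hi.
    result, i = "", 0
    while True:
        a = DIGITS.index(lo[i]) if i < len(lo) else 0             # digit of lo, or virtual 0
        b = DIGITS.index(hi[i]) if i < len(hi) else len(DIGITS)   # digit of hi, or virtual 36
        if b - a > 1:                     # there is room for a digit in between
            return result + DIGITS[(a + b) // 2]
        result += DIGITS[a]               # too close, copy the digit and go one level deeper
        i += 1

print(between("hl1", "hl4"))   # "hl2", compare the drag-and-drop example above
print(between("a", "b"))       # "ai", a new level is added when there is no gap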
You are right, the rank fields get recalculated now and then; Lexorank calls this "balancing". Basically, balancing takes place on one of three occasions: ① the ranks are exhausted (largest value reached), ② the ranks have become too close together due to user re-ranking of tickets ([a, b, i] and something is supposed to go in between a and b), and ③ a balancing is triggered manually on the management page. (Actually, according to the presentation, Lexorank allows for up to three-letter ranks, so "too close together" can be something like aaa and aab, but the idea is the same.)
The <bucket> part of the rank is increased during balancing, so a messy [0|a, 0|an, 0|b] can become a nice and clean [1|a, 1|i, 1|q] again. The brown-bag presentation about Lexorank (as linked by dandoen in the comments) mentions a round-robin use of <bucket>: instead of a constant increment (0→1→2→3→…), the bucket is incremented modulo 3, so it wraps back to 0 after 2 (0→1→2→0→…). When comparing ranks, the sorting algorithm can then consider a 0 "greater" than a 2 (admittedly it is not purely lexicographical then). If the balancing algorithm works backwards (reordering the last ticket first), this keeps the sort order intact the whole time. (This is just a side aspect, which is why I keep the explanation short, but if it is interesting, ask and I will elaborate.)
Sidenote: Lexorank also keeps track of minimum and maximum values of the ranks. For the functioning of the algorithm itself, this is not necessary.
This is an algorithmic problem and I'm not sure it has a solution. I think it's a specific case of a more generic computer-science problem that has no solution, but I'd rather not disclose which one, to avoid planting biases. It came up from a real-life situation in which our mobile phones were out of credit and thus we had no long-range communication.
Two groups of people, each with 2 people (but it might be true for N people), arranged to meet at the center of a park, but at the time of meeting the park is closed. Now they'll have to meet somewhere else around the park. Is there an algorithm each and every single individual could follow so that they all converge on one point?
For example, if each group splits in two and goes around the park, and whoever finds another person keeps going with that person, they would all converge on the other side of the park. But if the other group does the same, then they wouldn't be able to take the members of the other group they find with them, so this is not a possible solution.
I'm not sure if I explained well enough. I can try to draw a diagram.
Deterministic Solution for N > 1, K > 1
For N groups of K people each.
Since the problem is based on people whose mobile phones are out of credit, let's assume that each person in each group has their own phone. If that's not acceptable, then substitute the phone with a credit card, social security number, driver's license, or any other item with a numerical identifier that is guaranteed to be unique.
In each group, each person must remember the highest number among that group, and the person with the highest number (labeled leader) must travel clockwise around the perimeter while the rest of the group stays put.
After the leader of each group meets the next group, they compare their number with the group's previous leader number.
If the leader's number is higher than the group's previous leader's number, then the leader and the group all continue along the perimeter of the park. If the group's previous leader's number is higher, then they all stay put.
Eventually the leader with the highest number will travel around the entire perimeter exactly one full rotation, collecting everyone along the way.
Deterministic solution for N > 1, K = 1 (with one reasonable assumption of knowledge ahead-of-time)
In this case, each group only contains one person. Let's assume that the number used is a phone number, because it is then reasonable to also assume that at least one pair of people will know each other's numbers and so one of them will stay put.
For N = 2, this becomes trivially reduced to one person staying put and the other person going around clockwise.
For other cases, the fact that at least two people initially know each other's numbers effectively increases the maximum K to at least 2 (because the person or people who stay put will continue to stay put if the person they know has a higher number than the leader who shows up to meet them), but we still have to introduce one more step to the algorithm to make sure it terminates.
The extra step is that if a leader has continued around the perimeter for exactly one rotation without adding anyone to the group, then the leader must leave their group behind and start over for one more rotation around the perimeter. This means that a leader with no group will continue indefinitely until they find someone else, which is good.
With this extra step, it is easy to see why we have to assume that at least one pair of people need to know each other's phone numbers ahead of time, because then we can guarantee that the person who stays put will eventually accumulate the entire group.
Feel free to leave comments or suggestions to improve the algorithm I've laid out or challenge me if you think I missed an edge case. If not, then I hope you liked my answer.
Update
For fun, I decided to write a visual demo of my solutions to the problem using d3. Feel free to play around with the parameters and restart the simulation with any initial state. Here's the link:
https://jsfiddle.net/patrob10114/c3d478ty/show/
Key
black - leader
white - follower
when clicked
blue - selected person
green - known by selected person
red - unknown by selected person
Note that collaboration occurs at the start of every step, so if two groups just combined in the current step, most people won't know the people from the opposite group until after the next step is invoked.
They should move towards the northernmost point of the park.
I'd send both groups in a random direction. If they go half a circle without meeting the other group, re-randomize the directions. This will make them meet within a few rounds most of the time; however, there is a vanishingly small chance that they never meet.
It is not possible with a deterministic algorithm if
• we have to meet at some point on the perimeter,
• we are unable to distinguish points on the perimeter (or the algorithm is not allowed to use such a distinction),
• we are unable to distinguish individuals in the groups (or the algorithm is not allowed to use such a distinction),
• the perimeter is circular (see below for a more general case),
• we all follow the same algorithm, and
• the initial points may be anywhere on the perimeter.
Proof: With a deterministic algorithm we can deduce the final positions from the initial positions, but the groups could start evenly spaced around the perimeter, in which case the problem has rotational symmetry and so the solution will be unchanged by a 1/n rotation, which however has no fixed point on the perimeter.
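The same symmetry argument can be written out explicitly; a small sketch, parametrizing the perimeter as the unit interval with its ends glued together:

% Place the n groups at x_k = k/n on the perimeter R/Z.
% A deterministic, position-blind algorithm commutes with the rotation
%   r(x) = x + 1/n (mod 1),
% so any meeting point p it produces must satisfy
\[
  p \equiv p + \tfrac{1}{n} \pmod{1},
\]
% which is impossible for n > 1, so no meeting point can exist.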
Status of assumptions
Dropping various assumptions leads, as others have observed, to various solutions:
Non-deterministic: As others have observed, various non-deterministic algorithms do provide a solution whose probability of termination tends to certainty as time tends to infinity; I suspect almost any random walk would do. (Many answers)
Points indistinguishable: Agree on a fixed point at which to meet if needed: flyx’s answer.
Individuals indistinguishable: If there is a perfect hash algorithm, choose those with the lowest hash to collect others: Patrick Roberts’s solution.
Same algorithm: Choose one in advance to collect the others (adapting Patrick Roberts’s solution).
Other assumptions can be weakened:
Non-circular perimeter: The condition that the perimeter be circular is rather artificial, but if the perimeter is topologically equivalent to a circle, this equivalence can be used to convert any solution to a solution to the circle problem.
Unrestricted initial points: Even if the initial points cannot be evenly spaced, as long as some points are distinct, a topological equivalence (as for a non-circular perimeter) reduces a solution to a solution to the circular case, showing that no solution can exist.
I think this question really belongs on Computer Science Stack Exchange.
This question heavily depends on what kind of operations we have and what the environment is assumed to look like. I asked you these questions but got no reply, so here is my interpretation:
The park is a 2D space, the 2 groups are located randomly, and each group has the same notion of right/left (both are facing the park). Both have the same operations and are programmed to do exactly the same things (nothing like "I go right and you go left", because that makes the problem trivial). The operations are: go right/left/stop for x units of time. They can also detect that they have passed through their original position (the one at which they started). And they can be programmed in a loop.
If you have the ability to use randomness, everything is simple and you can come up with many solutions. For example: with probability 0.5, each of them decides either to take 3 steps right and wait, or to take one step right and wait. If you run this operation in a loop and they select different options, then clearly they will meet (one is faster than the other, so it will catch up to the slower one). If they both select the same operation, they will each make a full circle and return to their starting positions; in that case, roll the dice one more time. After N rounds the probability that they have met is 1 - 0.5^N (which approaches 1 very fast).
Surprisingly, there is a way to do it! But first we have to define our terms and assumptions.
We have N=2 "teams" of K=2 "agents" apiece. Each "agent" is running the same program. They can't tell north from south, but they can tell clockwise from counterclockwise. Agents in the same place can talk to each other; agents in different places can't.
Your suggested partial answer was: "If each group splits in two and goes around and when they find another person keep on going with that person, they would all converge on the other side of the park..." This implies that our agents have some (magic, axiomatic) face-to-face decision protocol, such that if Alice and Bob are on the same team and wake up at the same point on the circle, they can (magically, axiomatically) decide amongst themselves that Alice will head clockwise and Bob will head counterclockwise (as opposed to Alice and Bob always heading in exactly the same direction because by definition they react exactly the same way to the situation they're identically in).
One way to implement this magic decision protocol is to give each agent a personal random number generator. Whenever 2 or more agents are gathered at a certain point, they all roll a million-sided die, and whichever one rolls highest is acknowledged as the leader. So in your partial solution, Alice and Bob could each roll: whoever rolls higher (the "leader") goes clockwise and sends the other agent (the "follower") counterclockwise.
Okay, having solved the "how do our agents make decisions" issue, let's solve the actual puzzle!
Suppose our teams are (Alice and Bob) and (Carl and Dave). Alice and Carl are the initially elected leaders.
Step 1: Each team rolls a million-sided die to generate a random number. The semantics of this number are "The team with the higher number is the Master Team," but of course neither team knows right now who's got the higher number. But Alice and Bob both know that their number is let's say 424202, and Carl and Dave both know that their number is 373287.
Step 2: Each team sends its leader around the circle clockwise, while the follower stays stationary. Each leader stops moving when he gets to where the other team's follower is waiting. So now at one point on the circle we have Alice and Dave, and at the other point we have Carl and Bob.
Step 3: Alice and Dave compare numbers and realize that Alice's team is the Master Team. Likewise, Bob and Carl compare numbers and realize that Bob's team is the Master Team.
Step 4: Alice being the leader of the Master Team, she takes Dave with her clockwise around the circle. Bob and Carl (being a follower and a leader of a non-master team respectively) just stay put. When Alice and Dave reach Bob and Carl, the problem is solved!
Notice that Step 1 requires that both teams roll a million-sided die in isolation; if during Step 3 everyone realizes that there was a tie, they'll just have to backtrack and try again. Therefore this solution is still probabilistic... but you can make its expected time arbitrarily small by just replacing everyone's million-sided dice with trillion-sided, quintillion-sided, bazillion-sided... dice.
The general strategy here is to impose a pecking order on all N×K agents, and then bounce them around the circle until everyone is aware of the pecking order; then the top pecker can just sweep around the circle and pick everyone up.
Imposing a pecking order can be done by using the agents' personal random number generators.
The protocol for K>2 agents per team is identical to the K=2 case: you just glom all the followers together in Step 1. Alice (the leader) goes clockwise while Bob and the rest of the followers stay still; and so on.
The protocol for K=1 agents per team is... well, it's impossible, because no matter what you do, you can't deterministically ensure that anyone will ever encounter another agent. You need a way for the agents to ensure, without communicating at all, that they won't all just circle clockwise around the park forever.
One thing that would help with (but not technically solve) the K=1 case would be to consider the relative speeds of the agents. You might be familiar with Floyd's "Tortoise and Hare" algorithm for finding a loop in a linked list. Well, if the agents are allowed to move at non-identical speeds, then you could certainly do a "continuous, multi-hare" version of that algorithm:
Step 1: Each agent rolls a million-sided die to generate a random number S, and starts running clockwise around the park at speed S.
Step 2: Whenever one agent catches up to another, both agents glom together and start running clockwise at a new random speed.
Step 3: Eventually, assuming that nobody picked exactly the same random speeds, everyone will have met up.
This protocol requires that Alice and Carl not roll identical numbers on their million-sided dice even when they are across the park from each other. IMHO, this is a very different assumption from the other protocol's assuming that Alice and Bob could roll different numbers on their million-sided dice when they were in the same place. With K=1, we're never guaranteed that two agents will ever be in the same place.
Anyway, I hope this helps. The solution for N>2 teams is left as an exercise for the reader, but my intuition is that it'll be easy to reduce the N>2 case to the N=2 case.
Each group sends out a scout while the remaining group members stay stationary. Each group remembers the name of its scout. The scouts circle clockwise, and whenever a scout meets a group, they compare the names of their scouts:
If scout's name is earlier alphabetically: group follows him.
If scout's name is later: he joins the group and gives up his initial group identity.
By the time the lowest-named scout makes it back to his starting location, everyone who isn't already waiting at that location should be following him.
There are some solutions here that to me are unsatisfactory since they require the two teams to agree a strategy in advance and all follow the same deterministic or probabilistic rules. If you had the opportunity to agree in advance what rules you're all going to follow, then as flyx points out you could just have agreed a backup meeting point. Restrictions that prevent the advance choice of a particular place or a particular leader are standard in the context of some problems with computer networks but distinctly un-natural for four friends planning to meet up. Therefore I will frame a strategy from the POV of only one team, assuming that there has been no prior discussion of the scenario between the two teams.
Note that it is not possible to be robust in the face of any strategy from the other team. The other team can always force a stalemate simply by adopting some pattern of movement that ensures those two will never meet again.
One of you sets out walking around the park. The other stands still, let us say at position X. This ensures that: (a) you will meet each other periodically at X, let us say every T seconds; and (b) for each member of the other team, no matter how they move around the perimeter of the park they must encounter at least one of your team at least every T seconds.
Now you have communication among all members of both groups, and (given sufficient time and passing-on of messages from one person to another) the problem resolves to the same problem as if your mobile phones were working. Choosing a leader by random number is one way to solve it as others have suggested. Note that there are still two issues: the first is a two-generals problem with communication, and I suppose you might feel that a mobile phone conversation allows for the generation of common knowledge whereas these relayed notes do not. The second is the possibility that the other team refuses to co-operate and you cannot agree a meeting point no matter what.
Notwithstanding the above problems, the question supposes that if they had communication that the groups would be able to agree a meeting-point. You have communication: agree a meeting point!
As for how to agree a meeting point, I think it requires some appeal to reason or good intention on the part of the other team. If they are due to meet again, then they will be very reluctant to take any action that results in them breaking their commitment to their partner. Therefore suggest to them both that after their next meeting, when all commitments can be forgiven, they proceed together to X by the shortest route. Listen to their counter-proposal and try to find some common solution.
To help reach a solution, you could pre-agree with your team-mate some variations you'd be willing to make to your plan, provided that they remain within some restrictions that ensure you will meet your team-mate again. For example, if the stationary team-mate agrees that they could be persuaded to set out clockwise, and the moving team-mate sets out anti-clockwise and agrees that they can be persuaded to do something different but not to cross point X in a clockwise direction, then you're guaranteed to meet again and so you can accept certain suggestions from the other team.
Just as an example, if a team following this strategy meets a team (unwisely) following your strategy, then one of my team will agree to go along with the one of your team they meet, and the other will refuse (since it would require them to make the forbidden movement above). This means that when your team meet together, they'll have one of my team with them for a group of three. The loose member of my team is on a collision course with that group of three provided your team doesn't do anything perverse.
I think forming any group of three is a win, so each member should do anything they can to attend a meeting of the other team, subject to the constraints they agreed to guarantee they'll meet up with their own team member again. A group of 3, once formed, should follow whatever agreement is in place to meet the loose member (and if the team of two contained within that 3 refuses to do this then they're saboteurs, there is no good reason for them to refuse). Within these restrictions, any kind of symmetry-breaking will allow the team following these principles to persuade/follow the other team into a 3-way and then a 4-way meeting.
In general some symmetry-breaking is required, if only because both teams might be following my strategy and therefore both have a stationary member at different points.
Assume the park is a circle. (for the sake of clarity)
Group A
Person A.1
Person A.2
Group B
Person B.1
Person B.2
We (group A) are currently at the bottom of the circle (90 degrees). We agree to go towards 0 degrees in opposite directions. I'm person A.1 and I go clockwise. I send Person A.2. counterclockwise.
In any possible scenario (B splits, B doesn't split, B has the same scheme, B has some elaborate scheme), each group might have conflicting information. So unless Group A has a gun to force Group B into submission, the new groups might make conflicting choices upon meeting.
Say, for instance, A.1 meets B.1, and A.2 meets B.2. What do we (A.1 and B.1) do if B has the same scheme? Since the new groups can't know what the other group decides (whether to go with A's scheme or B's scheme), each group might make a different decision.
And we'll end up where we started... (i.e. two people at 0 degrees, and two people at 90 degrees). Let's call this checkpoint "First Iteration".
We might account for this and say that we'll come up with a scheme for the "Second Iteration". But then the same thing happens again. And for the third iteration, fourth iteration, ad infinitum.
Each iteration has a 50% chance of not working out.
This means that after x iterations, the chance that you still haven't met at a common point is 0.5^x, i.e. the chance of having met is 1 - (0.5^x).
N.B. I thought about a bunch of scenarios, such as Group A agreeing to come back to their initial point and communicating with each other what Group B plans to do. But no cigar; it turns out that even with very clever schemes, the conflicting-information issue always arises.
An interesting problem indeed. I'd like to suggest my version of the solution:
0: Every group picks a leader.
1: The leader and the followers go in opposite directions.
2: They meet other groups' leaders or followers.
3: They keep going in the same direction as before, for another 90 degrees.
4: By this time, all groups have made a half-circle around the perimeter and have invariably met leaders again, either their own or others'.
5: All leaders change their next-step direction to that of the followers around them, and order them to follow.
6: Units from all groups meet at one point.
Refer to the attached file for an in-depth explanation. You will need MS Office PowerPoint 2007 or newer to view it; if you don't have it, use the free PowerPoint Viewer as an alternative.
Animated Solution (.pptx)
EDIT: I made a typo in the first slide. It reads "Yellow and red are selected", while it must be "Blue and red" instead.
Each group will split in two parts, and each part will go around the circle in the opposite direction (clockwise and counterclockwise).
Before they start, they choose some kind of random number (in a range large enough that there is practically no possibility of two groups picking the same number), or a GUID in computer-science terms: a globally unique identifier. So: one unique number per group.
If people of the same group meet first (the two parts meet), they are alone, so probably the other groups (if any) gave up.
If two groups meet, they follow the rule that says the biggest number leads the way. So when they meet, they continue in the direction that the people with the biggest number were travelling.
At the end, the direction of the biggest number will lead them all to one point.
If they have no computer to choose this number, each group could use the full names of the people of the group merged together.
Edit: sorry, I just saw that this is very close to Patrick Roberts' solution.
Another edit: what if each group has its own deterministic strategy?
In the solution above, everything works well if all the groups follow the same strategy. But in a real-life problem this is not the case (as they can't communicate).
If one group has a deterministic strategy and the others have none, they can agree to follow the deterministic approach and all is OK.
But what if two groups have different deterministic approaches (for instance, the same as above, but one group follows the biggest number and the other group follows the lowest number)?
Is there a solution to that?
I have a website built with php/mysql, and I am looking for help in communicating to a Programmer what I want him to do with a Poll/Prediction game that I am trying to create.
For purposes of discussion, assume a game where perhaps 100 players try to predict the top 5 finishers in a Golf Tournament of perhaps 9 Golfers.
I am looking for help in how to create and assign a score based upon the accuracy of prediction.
The players provide a rank ordering using a drag and drop function to order the players from 1 through 5. This ordering has already been coded, and the ranks are stored somehow in the DB (I do not know how).
My initial thinking is to ask the coder to create a script which will assign a score to each golfer that the player nominated for the top 5, based on that golfer's actual finishing position.
So, a player who predicted perfectly would be awarded a perfect score of 12345.
His first golfer receives a 1 for finishing first, his second a 2 for finishing second, his third a 3 for finishing third, and so on.
Anybody less than perfect would have a score higher than 12345.
Players who got the first four positions correct would have to be differentiated on the basis of the finish of their fifth Golfer.
So, one might score 12347 and the other 12348 and the player with the highest score (12348) would be the loser in a matchup of the two players.
A player who did poorly, might have a score of 53419.
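For concreteness, the concatenated score described above could be computed roughly like this (a sketch of my own; golfer names are made up and it assumes at most 9 golfers, so every finishing position is a single digit):

def prediction_score(predicted, actual_order):
    # predicted: the player's 5 picks in predicted order (1st through 5th)
    # actual_order: all golfers in their actual finishing order
    position = {golfer: i + 1 for i, golfer in enumerate(actual_order)}
    return int("".join(str(position[g]) for g in predicted))

actual = ["G1", "G2", "G3", "G4", "G5", "G6", "G7", "G8", "G9"]
print(prediction_score(["G1", "G2", "G3", "G4", "G5"], actual))  # 12345 (perfect)
print(prediction_score(["G1", "G2", "G3", "G4", "G7"], actual))  # 12347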
Question:
Is this a viable way of creating a score which the players of my game can be ranked upon?
Is it possible to instead simply have something like a Spearman Rank-Order Correlation calculated comparing the Actual Finish Positions with the Predicted Finish Positions for each player,
and then rank players on the basis of the correlation coefficients for their rankings?
Thanks for any help in clarifying how to conceptualize this before approaching a programmer who gets annoyed when I don't really know what I want him to do ahead of time.
It's a quite interesting problem.
It seems that there are three components that need to be considered in the scoring: the number of correct predictions, the order of correct predictions, and the weight of correct predictions.
For example, assume the truth is:
1,5,10,15,20
Here are some predictions:
1,6,7,8,9 : only predicted first one
2,1,10,21,30 : 1 and 10, but the order of 1 is incorrect
20,15,1,5,30 : hit four in the top 5, but the orders are incorrect
It depends on what you value most. You may first check how many of the top 5 the user has predicted and add a value for that, then penalize wrong orders. The weight for each position should also be different, so that
1,5,10,15,20 will rank higher than 1,5,10,20,15 and higher than 1,10,5,20,15
Spearman might work, but I feel it could be too coarse for your purpose.
This is actually a problem very similar to the one search engines have. For example, in search-engine evaluation, the actual outcomes are preferred results provided by humans, and the predicted outcomes are the results delivered by the search engine. Both in your task and for search engines, I'd guess you care a lot more about the accuracy of the winner than about the accuracy of the 5th-place finisher. If that is the case, then mean average precision is probably a good measure.
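As a sketch of what average precision looks like for one player, treating the actual top-5 finishers as the "relevant" set (golfer names here are made up):

def average_precision(predicted, actual_top5):
    relevant = set(actual_top5)
    hits, score = 0, 0.0
    for k, golfer in enumerate(predicted, start=1):
        if golfer in relevant:
            hits += 1
            score += hits / k          # precision at this cut-off
    return score / len(relevant)

actual = ["G1", "G5", "G7", "G2", "G9"]
print(average_precision(["G1", "G5", "G7", "G2", "G9"], actual))  # 1.0 (perfect)
print(average_precision(["G1", "G3", "G4", "G6", "G8"], actual))  # 0.2 (only the winner right)

Averaging this value over all players gives the "mean" in mean average precision; for ranking the players against each other, the per-player value itself is what you would sort on.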
I have this problem: I want to know how often a player holding a portfolio of poker hands beats another player holding a different portfolio of poker hands.
Each hand in a portfolio is given a weight (i.e. a likelihood). Each hand in a portfolio also knows its own "strength". This effectively means all cards have been dealt, so please assume no more cards need to be dealt.
The reason this problem is annoying is duplicate cards. For example, if I pick a random holding from each player's portfolio, I must check that these holdings don't share a card; obviously both players can't be dealt the same card.
I want to do this quickly so that I can make many different RangeA vs RangeB comparisons per second. I have a solution, but I won't talk about it yet because I don't want to taint any responses.
-- For an Example --
Given a 5 card board of "Ah 3c 8c Td Jh":
HandRangeA = {{"As Ac", 2.5%}, {"As Ad", 2.5%}, {"Ac Kc", 5%}....}
HandRangeB = {{"As Ac", 7.5%}, {"As Ad", 7.5%}, {"Ac Kc", 5%}....}
(Each HandRange contains all possible holdings that don't use a "board card".)
Goal: compute the probability that HandRangeA beats HandRangeB.
I wrote some software that did this via Monte Carlo. That means I ran both hands to completion 1000 times, with random boards that could arrive given the situation, and counted wins and losses. It was surprisingly accurate.
Since I was doing it for Texas hold'em, I would do the same thing after the (1) deal, (2) flop, and (3) turn, so the player could see how their percentages changed given the board.
I really should have finished that software. But I stopped playing poker online....
I think Andrew Prock is considered the expert here; check out the discussion here, and links therein.
I think you want something like this:
probWin = 0
For Each HandA in RangeA
    probA = getProbability(HandA)
    For Each HandB in RangeB
        probB = getConditionalProbability(HandB, HandA)
        probWin += probA * probB * getProbabilityADefeatsB(HandA, HandB)
You need to consider conditional probability, because given HandA is As Ac, there is no longer a 7.5% chance that HandB is As Ac (in fact, there is a 0% chance of that). So you are taking the probability of A having a particular hand multiplied by the probability of B having a particular hand, given what A has, multiplied by the probability of A's hand beating B's hand. That should give you the probability of A having that hand against that particular hand of B's and winning. Iterating over all such pairs should give the desired result I think.
Since the approach is exhaustive, there is no need for any sort of Monte Carlo simulation. Of course this will be O(n^2) where n is the number of possible hands, but n here is relatively small.
EDIT:
I should note that since you are referring to the case where all cards have been dealt, the getProbabilityADefeatsB() function would return either 1 or 0. Also, getConditionalProbability() will either be exactly 0 (because the hands share a card) or simply what your regular weight was. It would be more complicated if the hands were less specific (if HandA is AA then HandB could be a different flavor of AA, but it is less likely).
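A rough Python sketch of that exhaustive loop, with the duplicate-card check folded in and the weights renormalized at the end rather than computing per-hand conditional probabilities (the frozenset hand representation and the a_beats_b callback are my own illustrative choices):

def range_vs_range(range_a, range_b, a_beats_b):
    # range_a / range_b: lists of (cards, weight), where cards is a frozenset like {"As", "Ac"}
    # a_beats_b(cards_a, cards_b): returns 1.0, 0.0 or 0.5 (tie), since all cards are already dealt
    win, total = 0.0, 0.0
    for cards_a, w_a in range_a:
        for cards_b, w_b in range_b:
            if cards_a & cards_b:          # the two holdings share a card: impossible combination
                continue
            weight = w_a * w_b
            total += weight
            win += weight * a_beats_b(cards_a, cards_b)
    return win / total if total else 0.0

As in the answer above, this is O(n^2) in the number of holdings per range, which stays cheap because n is relatively small.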