Can a player join multiple rooms? (PUN)

In Photon multiplayer for Unity, can a player join multiple rooms, or can player movement be synced across multiple rooms in a Photon multiplayer game?

This question is answered on Photon's forum.
PUN and Photon Realtime clients can, in general, only be joined to a single room at a time.

Related

Can you isolate audio data given two mics?

So I had an idea for a program, but I really have no idea if it would work, and I didn't really know where to post this question. I used "Which computer science / programming Stack Exchange sites do I post on?" to determine that, since this is an algorithmic question (can this algorithm be written?), this would be the best place.
So the idea is this. Imagine you are sitting at a table with someone else, and each of you has a mic, as if you're doing a podcast or something. Your voice is heard by your own mic, and by the other mic, albeit faintly.
The question is: can you use the distance between the two mics to act as a filter on your voice only?
Your voice is heard by the mic in front of you first, then by the mic further away from you after a very specific delay, since the mics are stationary. Audio from elsewhere in the room reaches the two mics with a different delay signature; for instance, a source perpendicular to the line between the mics would reach both at the same time.
It seems to me that this relationship could be used to isolate audio coming from a certain angle and filter out all the rest. Is this possible? Is it done already?
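In principle, yes; this is the idea behind time-difference-of-arrival processing and delay-and-sum beamforming, which microphone arrays use in practice. As a rough sketch of mine (the sample rate, mic spacing, and on-axis geometry are all assumed inputs): delay the near-mic signal so the target source lines up with the far-mic signal, then average, so that off-axis sources stay misaligned and partially cancel.

    import numpy as np

    def delay_and_sum(near, far, sample_rate, mic_distance, speed_of_sound=343.0):
        """Reinforce sound arriving along the axis through the two mics.

        near, far: equal-length 1-D sample arrays from the two microphones.
        mic_distance: spacing between the mics in metres.
        """
        # A source on the mic axis reaches the far mic this many samples late.
        delay = int(round(mic_distance / speed_of_sound * sample_rate))
        # Delay the near signal by the same amount so the target source is
        # aligned in both channels; off-axis sources remain misaligned and
        # partially cancel in the average.
        aligned = np.concatenate([np.zeros(delay), near])[:len(far)]
        return 0.5 * (aligned + far)

With only two mics the spatial filter is coarse: it attenuates rather than removes off-axis sources, which is why real arrays use more microphones and adaptive weights.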

Auto-match criteria by location

How does Google match two players looking for a random game?
For our app it is better to match people from the same location.
Is Google already preferring people living closer together when matching them?
If not: is there a better approach to matching players than separating the world by longitude and latitude, generating a unique id per square, and using setVariant(id)? (A sketch of this grid-id approach appears after the answer below.)
When players initiate a multiplayer game, they can choose to invite specific people or have Google Play games services automatically select other participants randomly via auto-matching. They can also request a mix of the two (for example, one specific player from their circles, and two auto-matched players).
If players choose to invite a specific person to a game, depending on their account settings, the UI they use, and the game’s scopes, they may have different people available to choose from.
In general, inviters will always be able to invite recent opponents, and will always be able to automatch players.
In the Google-supplied default UI, they will also see the “Nearby players” option.
In the Google-supplied default UI, players with discoverable profiles will see circled friends as well, independent of whether they have the PLUS_LOGIN scope.
Using the getInvitablePlayers() method, games that have requested the PLUS_LOGIN scope will get circled friends in the return values, independent of whether they have a discoverable profile.
An auto-match participant does not have to be a contact in the local player's circles or any other connection. When auto-matching, Google Play games services simply looks for other participants at that time who are also initiating a game and requesting to be auto-matched. Auto-match participants do not receive notifications to join a game; from their perspective, it appears as though they are individually initiating the game.
For more information, you can check the Real-time Multiplayer documentation.
When using Turn-based Multiplayer, the participants in a match can fall into one of these categories:
Initiating player
Auto-matched players
Invited players
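On the grid idea from the question (not part of Google's documentation; a hypothetical sketch with an assumed cell size): divide the world into fixed-size latitude/longitude cells and derive one integer id per cell to feed to setVariant(id). If setVariant() constrains the allowed range (check the current docs), hash the cell id into that range.

    import math

    CELL_DEGREES = 5.0  # assumed cell size; smaller cells = tighter matching

    def grid_variant(lat, lon):
        """Map a latitude/longitude to the unique id of its grid cell."""
        cols = int(math.ceil(360.0 / CELL_DEGREES))     # cells per row
        row = int((lat + 90.0) // CELL_DEGREES)
        col = int((lon + 180.0) // CELL_DEGREES) % cols
        return row * cols + col

    # Players in the same cell request the same variant and so can only be
    # auto-matched with each other.

Note that fixed-degree cells shrink toward the poles, and two nearby players can fall on opposite sides of a cell border; coarser cells trade match quality for match speed.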

Elo rating system without order of game played

I am looking for a rating system similar to the Elo rating system in chess.
The problem I have is that the Elo system depends on the order in which games were played.
E.g.:
Player A starting Elo: 1000
Player B starting Elo: 1000
If Player B wins against A, he will have, let's say, 1015 points and A 985.
If A keeps on playing and wins against other people, he will have a higher ranking than B if B stops playing.
I don't want that; B should still be rated stronger than A.
How can I achieve that?
From this link:
Whole-History Rating (WHR) is a new method to estimate the
time-varying strengths of players involved in paired comparisons. Like
many variations of the Elo rating system, the whole-history approach
is based on the dynamic Bradley-Terry model. But, instead of using
incremental approximations, WHR directly computes the exact maximum a
posteriori over the whole rating history of all players.
It's a rating system that doesn't depend on the order games were played, like the one you asked for, but it doesn't fix this issue within Elo itself.
On the other hand, many post-Elo rating systems, such as Glicko (for chess), TrueSkill (for Xbox games), or rankade (our multipurpose ranking system), do have some 'activity dynamics' feature to discourage the 'parking the bus' approach (a player reaches a high rank, then stops playing).
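Since WHR builds on the Bradley-Terry model, a static Bradley-Terry fit shows the order-independence directly. This is a generic sketch of mine (not WHR itself) of the classic minorization-maximization iteration; it assumes every player has at least one win and one loss, otherwise add a small 'virtual game' prior.

    import numpy as np

    def bradley_terry(wins, iterations=100):
        """Fit Bradley-Terry strengths from a wins matrix.

        wins[i][j] = how many times player i beat player j.  The result
        depends only on these totals, so the order of games is irrelevant.
        """
        wins = np.asarray(wins, dtype=float)
        n = wins.shape[0]
        games = wins + wins.T             # games[i][j] = games between i and j
        strengths = np.ones(n)
        for _ in range(iterations):
            for i in range(n):
                # Standard MM update: s_i <- W_i / sum_j n_ij / (s_i + s_j)
                denom = sum(games[i, j] / (strengths[i] + strengths[j])
                            for j in range(n) if games[i, j] > 0)
                strengths[i] = wins[i].sum() / denom
            strengths /= strengths.sum()  # fix the scale (identifiability)
        return strengths

    # B beat A twice, A beat B once: B comes out stronger no matter what
    # order the games happened in, and stays stronger if B stops playing.
    print(bradley_terry([[0, 1], [2, 0]]))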
There are a number of schemes which amount to writing down the win/lose/draw record as a matrix and then typically calculating the largest eigenvalue of some matrix related to this. One summary is at http://java.dzone.com/articles/ranking-systems-what-ive, which points to more technical papers including https://umdrive.memphis.edu/ccrousse/public/MATH%207375/PERRON.pdf - "The Perron-Frobenius Theorem and the Ranking of Football Teams".
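As a generic illustration of that family of schemes (a sketch, not the method from either linked paper): score players by the leading eigenvector of a slightly smoothed wins matrix, computed by power iteration, so a win over a strong player is worth more than a win over a weak one.

    import numpy as np

    def eigen_rank(wins, iterations=1000):
        """Rank players by the leading eigenvector of the wins matrix
        (Perron-Frobenius guarantees a positive eigenvector once the
        matrix is made irreducible by the small smoothing term)."""
        m = np.asarray(wins, dtype=float) + 1e-6
        v = np.ones(m.shape[0])
        for _ in range(iterations):
            v = m @ v       # power iteration
            v /= v.sum()    # renormalize each step to avoid overflow
        return v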
If you can get more information out of the game than just win/lose/draw, you might do better by using it. Some work on soccer has used the number of goals for and against in each match to try to work out the strengths of each team's offense and defense separately (and I do realise that soccer doesn't have separate offensive and defensive teams). In soccer it is reasonable to model the number of goals scored as a Poisson process. One deduction from this, by the way, is that soccer is inherently a pretty uncertain game, and that predicting score draws, as required in some betting formats, is especially uncertain. I try to remember the inevitable uncertainty every time England play a game :-).
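To make the Poisson point concrete, here is a small illustration of mine (the expected-goal rates are assumed inputs, e.g. fitted from those offense/defense strengths): turning two Poisson rates into win/draw/loss probabilities shows how much probability mass sits on the draw even between mismatched sides.

    from math import exp, factorial

    def poisson(rate, k):
        """P(k goals) when goals follow a Poisson process with this rate."""
        return rate ** k * exp(-rate) / factorial(k)

    def match_probabilities(home_rate, away_rate, max_goals=10):
        """Win/draw/loss probabilities given each side's expected goals."""
        win = draw = loss = 0.0
        for h in range(max_goals + 1):
            for a in range(max_goals + 1):
                p = poisson(home_rate, h) * poisson(away_rate, a)
                if h > a:
                    win += p
                elif h == a:
                    draw += p
                else:
                    loss += p
        return win, draw, loss

    print(match_probabilities(1.4, 1.1))  # a modest favourite still draws often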

algorithm scheduling, round robin tournament with multi teams / game

The round robin tournament algorithm works fine when only two teams meet per game. But how does one implement it for tournaments of sports or games where more than two teams meet in the same game? For instance, a paintball tournament where 2 to n teams meet in 2 to n games. Still keeping the constraint that all teams should be home teams once and only once if possible (if the teams cannot be evenly divided, it is acceptable that as few teams as possible will not be a home team).
Any ideas?
The givens are number of teams, number of games. Possibly the number of team per game may be a given.
If you need 3 teams to play in a game you can use a cubic representation (so for n teams per game it would be an n-hypercube). That of course means that every possible pair of teams will play against every other team; that's plenty of games. Total games played by each team is (n-1)(n-2)/2. Total games ever played is n(n-1)(n-2)/3! (3 being the number of teams per single game). So you can have (n-1)(n-2)/3! games where each team plays at home.
So, in general, if we have k teams playing per single game, total games per single team is (n-1)!/((n-k)!(k-1)!). Total games is n!/((n-k)!k!), and each team can have (n-1)!/((n-k)!k!) games played at home.
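One straightforward way to generate such a schedule (a sketch of one possible approach, not taken from the answer above): enumerate every k-team matchup with itertools.combinations and greedily hand home duty to the team that has had it least so far, spreading home games as evenly as C(n, k) games over n teams allows.

    from itertools import combinations

    def schedule(teams, k):
        """All k-team matchups, each with a home team chosen greedily so
        that home games are spread as evenly as possible."""
        home_count = {t: 0 for t in teams}
        games = []
        for matchup in combinations(teams, k):
            home = min(matchup, key=home_count.get)  # fewest home games so far
            home_count[home] += 1
            games.append((home, matchup))
        return games

    # Example: 5 teams, 3 per game -> C(5, 3) = 10 games over 5 teams, so
    # home duty can be (and here is) split evenly: twice each.
    for home, matchup in schedule(["A", "B", "C", "D", "E"], 3):
        print(home, matchup)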

Player rating for game with random teams

I am working on an algorithm to score individual players in a team-based game. The problem is that no fixed teams exist - every time 10 players want to play, they are divided into two (somewhat) even teams and play each other. For this reason, it makes no sense to score the teams, and instead we need to rely on individual player ratings.
There are a number of problems that I wish to take into account:
New players need some sort of provisional ranking to reach their "real" rating, before their rating counts the same as seasoned players.
The system needs to take into account that a team may consist of a mix of player skill levels - e.g. one really good, one good, two mediocre, and one really poor. Therefore a simple "average" of player ratings probably won't suffice, and it probably needs to be weighted in some way.
Ratings are adjusted after every game and as such the algorithm needs to be based on a per-game basis, not per "rating period". This might change if a good solution comes up (I am aware that Glicko uses a rating period).
Note that cheating is not an issue for this algorithm, since we have other measures of validating players.
I have looked at TrueSkill, Glicko and Elo (which is what we're currently using). I like the idea of TrueSkill/Glicko, where you have a deviation that is used to determine how precise a rating is, but none of the algorithms take the random-teams perspective into account and they seem to be mostly based on 1v1 or FFA games.
It was suggested somewhere that you rate players as if each player from the winning team had beaten all the players on the losing team (25 "duels"), but I am unsure if that is the right approach, since it might wildly inflate the rating when a really poor player is on the winning team and gets a win vs. a very good player on the losing team.
Any and all suggestions are welcome!
EDIT: I am looking for an algorithm for established players + some way to rank newbies, not the two combined. Sorry for the confusion.
There is no AI and players only play each other. Games are determined by win/loss (there is no draw).
Provisional ranking systems are always imperfect, but the better ones (such as Elo) are designed to adjust provisional ratings more quickly than for ratings of established players. This acknowledges that trying to establish an ability rating off of just a few games with other players will inherently be error-prone.
I think you should use the average rating of all players on the opposing team as the input for establishing the provisional rating of the novice player, but handle it as just one game, not as N games vs. N players. Each game is really just one data sample, and the Elo system handles accumulation of these games to improve the ranking estimate for an individual player over time before switching over to the normal ranking system.
For simplicity, I would also not distinguish between established and provisional ratings for members of the opposing team when calculating a new provisional rating for some member of the other team (unless Elo requires this). All of these ratings have implied error, so there is no point in adding unnecessary complications of probably little value in improving the ranking estimates.
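A minimal sketch of that advice (my illustration; the K-factors and the provisional threshold are assumed values to tune): a standard Elo update against the average rating of the opposing team, treated as one game, with a larger K while the player is still provisional.

    PROVISIONAL_GAMES = 20   # assumed: games before a rating is 'established'
    K_PROVISIONAL = 40.0     # assumed: provisional ratings move faster
    K_ESTABLISHED = 16.0

    def expected_score(rating, opponent_rating):
        """Standard Elo expected score against a single opponent rating."""
        return 1.0 / (1.0 + 10.0 ** ((opponent_rating - rating) / 400.0))

    def update(rating, games_played, opposing_team_ratings, won):
        """New rating after one team game, scored against the opposing
        team's average rating as a single data sample."""
        opponent = sum(opposing_team_ratings) / len(opposing_team_ratings)
        k = K_PROVISIONAL if games_played < PROVISIONAL_GAMES else K_ESTABLISHED
        return rating + k * ((1.0 if won else 0.0)
                             - expected_score(rating, opponent))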
First off: It is very very unlikely that you will find a perfect system. Every system will have a flaw somewhere.
And to answer your question: perhaps the ideas in "Lehman Rating on OKBridge" will help.
This rating system is in use (since 1993!) on the internet bridge site called OKBridge. Bridge is a partnership game and is usually played with a team of 2 opposing another team of 2. The rating system was devised to rate the individual players and caters to the fact that many people play with different partners.
Without any background in this area, it seems to me that a ranking system is basically a statistical model. A good model will converge to a consistent ranking over time, and the goal would be to converge as quickly as possible. Several thoughts occur to me, several of which have been touched upon in other postings:
Clearly, established players have a track record and new players don't. So the uncertainty is probably greater for new players, although for inconsistent players it could be very high. Also, this probably depends on whether the game primarily uses innate skills or acquired skills. I would think that you would want a "variance" parameter for each player. The variance could be made up of two parts: a true variance and a "temperature". The temperature is like in simulated annealing, where you have a temperature that cools over time. Presumably, the temperature would cool to zero after enough games have been played (see the sketch after these notes).
Are there multiple aspects that come into play? Like in soccer, you may have good shooters, good passers, guys who have good ball control, etc. Basically, these would be the degrees of freedom in your system (in my soccer analogy, they may or may not be truly independent). It seems like an accurate model would take these into account; of course, you could have a black-box model that implicitly handles them. However, I would expect that understanding the number of degrees of freedom in your system would be helpful in choosing the black box.
How do you divide teams? Your teaming algorithm implies a model of what makes equal teams. Maybe you could use this model to create a weighting for each player and/or an expected performance level. If there are different aspects of player skills, maybe you could give extra points to players whose performance in one aspect is significantly better than expected.
Is the game truly win or lose, or could the score differential come into play? Since you said no ties, this probably doesn't apply, but at the very least a close score may imply a higher uncertainty in the outcome.
If you're creating a model from scratch, I would design with the intent to change. At a minimum, I would expect there to be a number of parameters that are tunable, and they might even be auto-tuning. For example, as you have more players and more games, the initial temperature and initial rating values will be better known (assuming you are tracking the statistics). But I would certainly anticipate that the more games have been played, the better the model you could build.
Just a bunch of random thoughts, but it sounds like a fun problem.
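Picking up the "temperature" idea from the first note above (a sketch under assumed constants, not a known system): scale each rating update by an uncertainty that cools toward a floor as a player accumulates games.

    import math

    INITIAL_TEMPERATURE = 3.0   # assumed: brand-new players adjust 3x as fast
    COOLING_GAMES = 30.0        # assumed time constant, in games played
    BASE_K = 16.0               # assumed K-factor for fully 'cooled' players

    def k_factor(games_played):
        """Elo-style K-factor that decays exponentially toward BASE_K."""
        temperature = 1.0 + (INITIAL_TEMPERATURE - 1.0) * math.exp(
            -games_played / COOLING_GAMES)
        return BASE_K * temperature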
There was an article in Game Developer Magazine a few years back by some guys from the TrueSkill team at Microsoft, explaining some of their reasoning behind the decisions there. It definitely mentioned team games for Xbox Live, so it should be at least somewhat relevant. I don't have a direct link to the article, but you can order the back issue here: http://www.gdmag.com/archive/oct06.htm
One specific point that I remember from the article was scoring the team as a whole, instead of e.g. giving more points to the player that got the most kills. That was to encourage people to help the team win instead of just trying to maximize their own score.
I believe there was also some discussion on tweaking the parameters to try to accelerate convergence to an accurate evaluation of the player skill, which sounds like what you're interested in.
Hope that helps...
How is the 'scoring' settled?
If a team scored 25 points in total (the scores of all players on the team), you could divide each player's score by the total team score and multiply by 100 to get the percentage of how much that player did for the team (or of all points from both teams).
You could calculate a score from this data, and if a player's percentage is lower than that of, e.g., 90% of the team members (or members of both teams), treat the player as a novice and calculate the score with a different weighting factor.
Sometimes an easier concept works out better.
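A tiny sketch of that contribution metric (my illustration; the example scores and the cutoff are arbitrary assumptions):

    def contribution_percentages(team_scores):
        """Each player's share of the team's total score, as a percentage."""
        total = sum(team_scores.values())
        return {player: 100.0 * score / total
                for player, score in team_scores.items()}

    shares = contribution_percentages({"ann": 10, "bob": 9, "cy": 6})
    novices = [p for p, pct in shares.items() if pct < 20.0]  # assumed cutoff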
The first question has a very 'gamey' solution. You can create a newbie lobby for the first couple of games, where the players can't see their score until they finish a certain number of games that give you enough data for an accurate rating.
Another option is a variation on the first, but simpler: give them a single match vs. AI that will be used to determine a beginning score (look at Quake Live for an example).
For anyone who stumbles in here years after it was posted: TrueSkill now supports teams made up of multiple players and changing configurations.
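For instance, the third-party Python trueskill package (pip install trueskill) rates ad-hoc teams of any size directly; a minimal sketch (check the package docs for current API details):

    from trueskill import Rating, rate

    # Two pick-up teams of five; every player keeps an individual rating
    # (a skill estimate mu plus an uncertainty sigma).
    team_a = [Rating() for _ in range(5)]
    team_b = [Rating() for _ in range(5)]

    # ranks: lower is better, so [0, 1] means team_a won this game.
    team_a, team_b = rate([team_a, team_b], ranks=[0, 1])
    print(team_a[0])  # the winners' mu went up and their sigma tightened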
Every time 10 players want to play, they are divided into two (somewhat) even teams and play each other.
This is interesting, as it implies both that the average skill level on each team is equal (and thus unimportant) and that each team has an equal chance of winning. If you assume this constraint to hold true, a simple count of wins vs losses for each individual player should be as good a measure as any.
