Google Play Games Services unlocking multiple achievements with the same increment counter - google-play-games

I have a game and would like to add some achievements.
In this example, let's say there are the following:
Won 5 Games
Won 10 Games
Won 100 Games
These 3 achievements share the same counter. But as far as I understand, I need to create 3 different incremental achievements and increment each of them whenever a game is won.
The only other alternative I see is to not make them incremental achievements and to count wins locally instead.
Any other suggestions?

Your first assumption is correct. You would need to create three different incremental achievements and increment each one of them separately.
That said, this is normal and expected behavior. If you're worried about being throttled for hitting the service too much, the "increment three different achievements at once" quota is much more lenient than the "increment the same achievement three times in a row" one, so you should be fine. Plus, the Play Games library might be smart enough to submit this as a single batch call on your behalf.
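In code this stays tiny: on every win you increment all three achievements by one step and let the service unlock each one when its step count is reached. Here is a minimal sketch; the achievement IDs are hypothetical placeholders for the IDs from your Play Console, and `AchievementSink` stands in for the real client's increment call so the logic is self-contained:

```java
import java.util.ArrayList;
import java.util.List;

public class WinAchievements {
    // Hypothetical achievement IDs; replace with the IDs from your Play Console.
    static final String[] WIN_ACHIEVEMENTS = {
        "achievement_won_5_games", "achievement_won_10_games", "achievement_won_100_games"
    };

    // Stand-in for the Play Games client's increment(id, numSteps) call;
    // in a real app this would be backed by the achievements client.
    interface AchievementSink { void increment(String id, int numSteps); }

    // On every win, increment all three achievements by one step.
    // Play Games Services tracks each counter server-side and unlocks
    // an achievement automatically once its step target is reached.
    static void onGameWon(AchievementSink client) {
        for (String id : WIN_ACHIEVEMENTS) {
            client.increment(id, 1);
        }
    }

    public static void main(String[] args) {
        List<String> calls = new ArrayList<>();
        onGameWon((id, steps) -> calls.add(id + ":" + steps));
        System.out.println(calls);
    }
}
```

The local wins counter never needs to exist; the three server-side counters simply advance in lockstep.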

Related

Is it possible to use two players at the same time with jFugue?

I'm trying to make a musical game (a bit like "Guitar Hero"), and I'm having trouble notifying the player when he/she plays a wrong 'note'. It's just a basic system for now, but, for example, if the player has to press the Up Arrow and misses it (no matter whether they press another key or no key at all), I want to make a noise or play an out-of-tune note.
I'm trying two ways:
a) to use a second player that, when the miss is detected, plays an out of tune chord.
b) to modify the volume of the pattern which is already being played.
With the first approach, I think it is simply not possible to play two Players at the same time; I'm going to try using a second thread this weekend. Even so, I suspect it won't work in theory, since both Players would be sending different instructions to the same PC sound board. This is the head of the error I get when I try this option:
Exception in thread "Thread-2" java.lang.NumberFormatException: Value
out of range. Value:"200" Radix:10
("200" is the volume value I gave to the miss note, but no matter what value I use, it always fails.)
With the second approach, I found no example of a pattern being modified while it is playing, nor any question here confirming that it is possible.
Any idea what I should try?
A Player plays music by connecting to a MIDI Sequencer, and there is only one of those on your system, so you can really have only one Player. (I'm realizing now that the JFugue API doesn't enforce that, and perhaps it should.)
MIDI has 16 channels, which JFugue calls Voices. Instead of having two Players, you could play the main melody of your song in Voice 0, and the player's feedback in, say, Voice 5. Notes in each of those voices could use different instruments. If the player gets the note right, the Voice 0 and Voice 5 notes will sound together. If they get it wrong, you can play something different in Voice 5, or if they don't play anything at all, you could play nothing on Voice 5.
If you are playing the Voice 0 and Voice 5 notes one at a time (instead of sending a full pattern at the beginning of the game), you could also modify the on velocity and off velocity of the note on Voice 0 to make it sound fainter.
(Voice 9 is the percussion track, so don't play on that!)
We would need to see some source code to see why you're getting that error with the volume. (One note: MIDI data values are limited to 0-127, so a volume of 200 is out of range, which may well explain that exception.)
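One way to sketch the voice idea, assuming JFugue 5's Staccato syntax (where `V` selects a voice and an `a` suffix sets a note's attack velocity), is to build a single pattern string for one Player, clamping velocities to the legal MIDI range:

```java
public class NoteFeedback {
    // Clamp to the legal MIDI velocity range. Values above 127 (like 200)
    // overflow a 7-bit MIDI data byte, which is the likely cause of the
    // NumberFormatException quoted above.
    static int clampVelocity(int v) {
        return Math.max(0, Math.min(127, v));
    }

    // Build a Staccato string (JFugue 5 syntax) that plays the melody note
    // on Voice 0 and the feedback note on Voice 5 with a given attack velocity.
    static String feedback(String melodyNote, String missNote, int velocity) {
        return "V0 " + melodyNote + " V5 " + missNote + "a" + clampVelocity(velocity);
    }

    public static void main(String[] args) {
        System.out.println(feedback("C5q", "Db5q", 200));
        // With JFugue on the classpath you would then play it with a single Player:
        // new org.jfugue.player.Player().play(feedback("C5q", "Db5q", 200));
    }
}
```

The note names here are illustrative; the point is that both the melody and the feedback live in one pattern handled by one Player, so no second Player or thread is needed.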

How to choose matchups in an ELO ratings system as matchups accumulate

I'm working on a crowdsourced app that will pit about 64 fictional strongmen/strongwomen from different franchises against one another and try and determine who the strongest is. (Think "Batman vs. Spiderman" writ large). Users will choose the winner of any given matchup between two at a time.
After researching many sorting algorithms, I found this fantastic SO post outlining the ELO rating system, which seems absolutely perfect. I've read up on the system and understand both how to award/subtract points in a matchup and how to calculate the performance rating between any two characters based on past results.
What I can't seem to find is any efficient and sensible way to determine which two characters to pit against one another at a given time. Naturally it will start off randomly, but quickly points will accumulate or degrade. We can expect a lot of disagreement but also, if I design this correctly, a large amount of user participation.
So imagine you arrive at this feature after 50,000 votes have been cast. Given that we can expect all sorts of non-transitive results under the hood, and a fair amount of deviation from the performance ratings, is there a way to calculate which matchups I most need more data on? It doesn't seem as simple as choosing two adjacent characters in a sorted list with the closest scores, or just focusing at the top of the list.
With 64 entrants (and yes, I did consider and reject a bracket!), I'm not worried about recomputing the performance ratings after every matchup. I just don't know how to choose the next one, seeing as we'll be ignorant of each voter's biases and favorite characters.
The variety you experience in multiplayer games comes from different people with different ratings "queuing up" at different times.
Under the ELO system, every player should ideally be matched with the available player whose score is closest to their own. Since, if I understand correctly, the 64 "players" in your game are always available, this leads to a lack of variety: the optimal match-ups will always be, well, optimal.
To resolve this, I suggest implementing a priority queue, based on when your "players" feel like playing again. For example, if one wants to take a long break, they may receive a low priority and be placed towards the end of the queue, meaning it will be a while before you see them again. If one wants to take a short break, maybe after about 10 matches, you'll see them in a match again.
This "desire" can be generated randomly, and you can assign different characteristics to each character to skew the behaviour, such as "winning against a higher-ELO player makes this player more likely to play again sooner". From a game design perspective, these personalities would make the characters seem more interesting to me, making me want to stick around.
So here you have an ordered list of players who want to play. I can think of three approaches you might take for the actual matchmaking:
Peek at the first 5 players in the queue and pick the best match up
Match the first player with their best match in the next 4 players in the queue (presumably waited the longest so should be queued immediately, regardless of the fairness of the match up)
A combination of both, where if the person at the head of the list doesn't get picked, they'll increase in "entropy", which affects the ELO calculation making them more likely to get matched up
Edit
From an implementation perspective, I'd recommend using a delta list instead of an actual priority queue, since players should be "promoted" as they wait.
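The first of the three approaches above can be sketched like this (the character names, ratings, and window size of 5 are made up for illustration):

```java
import java.util.*;

public class Matchmaker {
    record Contender(String name, int rating) {}

    // Peek at the first k contenders waiting in the queue and pick the
    // pair with the smallest rating difference, removing them from the queue.
    static Contender[] bestPair(Deque<Contender> queue, int k) {
        List<Contender> window = new ArrayList<>();
        Iterator<Contender> it = queue.iterator();
        for (int i = 0; i < k && it.hasNext(); i++) window.add(it.next());

        Contender[] best = null;
        int bestDiff = Integer.MAX_VALUE;
        for (int i = 0; i < window.size(); i++) {
            for (int j = i + 1; j < window.size(); j++) {
                int diff = Math.abs(window.get(i).rating() - window.get(j).rating());
                if (diff < bestDiff) {
                    bestDiff = diff;
                    best = new Contender[] { window.get(i), window.get(j) };
                }
            }
        }
        if (best != null) {           // the matched pair leaves the queue
            queue.remove(best[0]);
            queue.remove(best[1]);
        }
        return best;
    }

    public static void main(String[] args) {
        Deque<Contender> queue = new ArrayDeque<>(List.of(
            new Contender("Batman", 1600), new Contender("Spiderman", 1450),
            new Contender("Hulk", 1580), new Contender("Thor", 1700),
            new Contender("Flash", 1500), new Contender("Vision", 1595)));
        Contender[] match = bestPair(queue, 5);
        System.out.println(match[0].name() + " vs " + match[1].name());
    }
}
```

The second and third approaches only change how the pair is chosen from the window, so the same structure carries over.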
To avoid obvious winner-vs-loser situations, you can group the players into tiers.
Initially, of course, everybody will be in the same tier [0 - N1].
Then, within a tier, you make a rotational schedule so that every pair of parties "matches" at least once.
If you don't want to maintain a schedule, then always match with the party that has participated in the fewest "matches"; if there are several such parties, make a random pick.
This way you ensure that everybody participates in roughly the same number of "matches".
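The schedule-free variant is only a few lines. A sketch, with illustrative names, picking the party with the fewest matches and breaking ties at random:

```java
import java.util.*;

public class TierScheduler {
    // Pick the next participant within a tier: the party with the fewest
    // matches played so far, breaking ties at random.
    static String nextOpponent(Map<String, Integer> matchCounts, Random rng) {
        int fewest = Collections.min(matchCounts.values());
        List<String> candidates = new ArrayList<>();
        for (Map.Entry<String, Integer> e : matchCounts.entrySet()) {
            if (e.getValue() == fewest) candidates.add(e.getKey());
        }
        return candidates.get(rng.nextInt(candidates.size()));
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>(Map.of(
            "Goku", 4, "Superman", 2, "He-Man", 7));
        System.out.println(nextOpponent(counts, new Random()));  // prints "Superman"
    }
}
```

Calling this twice (once per side of the match-up) and incrementing the winners' and losers' counts afterwards keeps participation even across the tier.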

Viable use of genetic algorithms to train neural nets in a poker bot?

I am designing a bot to play Texas Hold'Em Poker on tables of up to ten players, and the design includes a few feed forward neural networks (FFNN). These neural nets each have 8 to 12 inputs, 2 to 6 outputs, and 1 or 2 hidden layers, so there are a few hundred weights that I have to optimize. My main issue with training through back propagation is getting enough training data. I play poker in my spare time, but not enough to gather data on my own. I have looked into purchasing a few million hands off of a poker site, but I don't think my wallet will be very happy with me if I do... So, I have decided on approaching this by designing a genetic algorithm. I have seen examples of FFNNs being trained to play games like Super Mario and Tetris using genetic algorithms, but never for a game like poker, so I want to know if this is a viable approach to training my bot.
First, let me give a little background information (this may be confusing if you are unfamiliar with poker). I have a system in place that allows the bot to put its opponents on a specific range of hands so that it can make intelligent decisions accordingly, but it relies entirely on accurate output from three different neural networks:
NN_1) This determines how likely it is that an opponent is a) playing the actual value of his hand, b) bluffing, or c) playing a hand with the potential to become stronger later on.
NN_2) This assumes the opponent is playing the actual value of his hand and outputs the likely strength. It represents option (a) from the first neural net.
NN_3) This does the same thing as NN_2 but instead assumes the opponent is bluffing, representing option (b).
Then I have an algorithm for option (c) that does not use a FFNN. The outputs for (a), (b), and (c) are then combined based on the output from NN_1 to update my opponent's range.
Whenever the bot is faced with a decision (i.e. should it fold, call, or raise?), it calculates which is most profitable based on its opponents' hand ranges and how they are likely to respond to different bet sizes. This is where the fourth and final neural net comes in. It takes inputs based on properties unique to each player and the state of the table, and it outputs the likelihood of the opponent folding, calling, or raising.
The bot will also have a value for aggression (how likely it is to raise instead of call) and its opening range (which hands to play pre-flop). These four neural networks and two values will define each generation of bots in my genetic algorithm.
Here is my plan for training:
I will be simulating multiple large tournaments with 10n initial bots each with random values for everything. For the first few dozen tournaments, they will all be placed on tables of 10. They will play until either one bot is left or they play, say, 1,000 hands. If they reach that hand limit, the remaining bots will instantly go all-in every hand until one is left. After each table has completed, the most accurate FFNNs will be placed in the winning bot that will move on to the next round (even if the bot containing the best FFNN was not the winner). The winning bot will retain its aggression and opening range values. The tournament ends when only 100 bots remain, and random variations on those bots will generate the players for the next tournament. I'm assuming the first few tournaments will be complete chaos, so I don't want to narrow down my options too much early on.
If by some miracle, the bots actually develop a profitable, or at least somewhat coherent, strategy (I will check for this periodically), I will begin decreasing the amount of variation between bots. Anyone who plays poker could tell you that there are different types of players each with different strategies. I want to make sure that I am allowing enough room for different strategies to develop throughout this process. Then I may develop some sort of "super bot" that can switch between those different strategies if one is failing.
So, are there any glaring issues with this approach? If so, how would you recommend fixing them? Do you have any advice for speeding up this process or increasing my chances of success? I just want to make sure I'm not about to waste hundreds of hours on something doomed to fail. Also, if this site is not the correct place to ask this question, please refer me to another website before flagging it. I would really appreciate it. Thanks all!
It will be difficult to use an ANN for a poker bot. It is better to consider an expert system. You can use an odds calculator to get a numerical evaluation of hand strength, and then an expert system for money management (risk management). ANNs are better suited to other problems.
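As a sketch of what such an expert system's core rule might look like, here is a comparison of an odds calculator's win probability against the pot odds; the 0.15 raise margin is an arbitrary illustrative threshold, not a recommended value:

```java
public class PokerExpert {
    enum Action { FOLD, CALL, RAISE }

    // A minimal expert-system rule: calling is profitable on average only
    // when the estimated win probability exceeds the pot odds, i.e. the
    // fraction of the final pot you would be putting in with the call.
    static Action decide(double winProbability, double callAmount, double potSize) {
        double potOdds = callAmount / (potSize + callAmount);
        if (winProbability < potOdds) return Action.FOLD;   // a losing call on average
        if (winProbability > potOdds + 0.15) return Action.RAISE;
        return Action.CALL;
    }

    public static void main(String[] args) {
        // 30% to win, must call 10 into a pot of 90: pot odds = 10/100 = 0.10
        System.out.println(decide(0.30, 10, 90)); // RAISE (0.30 > 0.25)
    }
}
```

A real expert system would layer many such rules (position, stack depth, opponent tendencies), but this is the backbone the odds calculator feeds into.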

Prolog: Tournament Schedule

I am trying to automatically generate a schedule for a volleyball tournament I am organizing. It is mainly for fun. The rules are as follows:
Teams that sign up consist of 2 players
There are 36 such teams
For a match, 3 such teams get put together to a "game team", so a match consists of 3 teams vs. 3 teams
Every team plays 5 matches
There are 3 courts that can be used at the same time
Thus, a total of 10 rounds will be played (18 teams can play at the same time, so that is 36 teams × 5 matches each / 18 = 10 rounds)
Games are officiated by teams
Additional constraints are:
Every team officiates at most once
If possible, a team should not play with another team that it already played with before (if it played against, it is fine)
There should not be more than 2 rounds break in between games for each team
Now I thought that this sounds like a problem that Prolog is a good choice for. Unfortunately, I only have theoretical experience with it. It would be great if anybody could give me a good starting point, especially on how to fulfil constraints like "officiates at most once" and "every team plays 5 times". Also, a more compact representation of teams than
team(A).
team(B).
....
would be great. I already tried implementing this in Java, but came to the conclusion that it is not a well-suited language for this. I'd like to do it in Prolog now.

Relative Sorting Algorithm Based On 2 Related Variables

I have a bunch of video games I like, but I like some more than others. I have assigned them all a ranking 1 - 100 inclusive (not necessarily all integers are used and some may be used twice, although this can be changed if needed).
I have a list in Excel with three columns: name of the video game, rating (1-100), and date last played. There is also a VBA macro button labelled "kick to bottom" (more on this below).
When I assign a video game a ranking of 1, I want to be reminded to play it ideally every week.
When I assign a video game a ranking of 100, I want to be reminded to play it ideally every 52 weeks.
I define "reminded" as floating to the top of this list in Excel. So all games are slowly floating to the top (highly rated ones just faster than others) to remind me to play them. If I "kick" one to the bottom because I was reminded to play it, the counter/date on that game starts over.
I'd imagine that for this to work, games will have to "pile up" at the top, because many of them may have due dates at closely related times.
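One way to formalize the floating behaviour described above, assuming a linear mapping from rating to reminder interval, is a "float score" you can sort the sheet by (sketched here in Java rather than VBA for brevity):

```java
public class Backlog {
    // Map a 1-100 rating linearly onto a reminder interval of 1-52 weeks,
    // matching "rating 1 -> every week" and "rating 100 -> every 52 weeks".
    static double intervalWeeks(int rating) {
        return 1.0 + (rating - 1) * 51.0 / 99.0;
    }

    // Float score: the fraction of the reminder interval already elapsed.
    // Sorting descending by this score puts the most overdue games on top;
    // "kicking" a game simply resets weeksSinceLastPlayed to zero.
    static double floatScore(int rating, double weeksSinceLastPlayed) {
        return weeksSinceLastPlayed / intervalWeeks(rating);
    }

    public static void main(String[] args) {
        System.out.println(floatScore(1, 2.0));    // 2.0 -> two intervals overdue
        System.out.println(floatScore(100, 26.0)); // 0.5 -> halfway to its reminder
    }
}
```

In the spreadsheet itself, the same score is one helper-column formula (elapsed days divided by the rating-derived interval), and the "kick" macro only has to overwrite the date-last-played cell.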
I think a perfect solution exists but it doesn't have to even be perfect.
How can I go about solving this problem?
