How to decide who is the first player when a user plays a quick game? - google-play-games

I am using the following code in onRoomConnected(int statusCode, Room room) to decide who is the first player. But sometimes both players end up with the same role (both first or both second). How can I resolve this?
if (quickGame) {
    myTurn = room.getParticipants().get(0).getParticipantId().equals(myId);
} else {
    myTurn = room.getCreatorId().equals(myId);
}
if (myTurn) {
    Log.e(TAG, "First Player");
    status.setText("Click a button to start");
} else {
    Log.e(TAG, "Second Player");
    status.setText("Wait for opponent to start");
}

The set of participant IDs is guaranteed to be the same for everybody who is in the room (but not across different matches). However, their order in the list is not guaranteed. So if you want to make an easy election (e.g. establish who goes first), you must rely on the set of participant IDs, but not on their order. Some of the ways you can accomplish this are:
1. The participant ID that comes alphabetically first is the first player to play.
2. The participant ID that comes alphabetically first is responsible for randomly electing a player to start first. The other clients wait until that participant sends a reliable real-time message containing the ID of the elected participant.
Method (2) is preferred, because it avoids a possible bias.
Why? Although we don't specify what the structure of a participant ID is (it's just a string), the truth is that it does encode information, so if you use the participant ID as a rule, you might end up with a skewed distribution of who goes first. For example, you might find that a particular player always goes first, but that's because, coincidentally, their participant ID is generated in such a way that this always happens. So it's definitely a better idea to use the participant ID to elect who is the authority that randomly decides who goes first, not who actually goes first.
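For illustration, here is a minimal sketch of method (2). It assumes the myTurn and myId fields from the question, and a hypothetical helper sendReliableMessage(recipientId, payload) standing in for whichever reliable real-time messaging call the app already uses; it is one way to wire up the election, not the library's prescribed flow.

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

import com.google.android.gms.games.multiplayer.Participant;
import com.google.android.gms.games.multiplayer.realtime.Room;

// ...

private void electFirstPlayer(Room room, String myId) {
    List<String> ids = new ArrayList<>();
    for (Participant p : room.getParticipants()) {
        ids.add(p.getParticipantId());
    }
    Collections.sort(ids); // every device sees the same sorted order

    // Only the alphabetically-first participant performs the election.
    if (ids.get(0).equals(myId)) {
        String elected = ids.get(new Random().nextInt(ids.size()));
        byte[] payload = ("FIRST:" + elected).getBytes(StandardCharsets.UTF_8);
        for (String id : ids) {
            if (!id.equals(myId)) {
                sendReliableMessage(id, payload); // hypothetical wrapper, not a Play Games API call
            }
        }
        myTurn = elected.equals(myId);
    }
    // All other clients wait for the "FIRST:<participantId>" message in their
    // onRealTimeMessageReceived handler and set myTurn = electedId.equals(myId).
}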

One way to approach this is to track the participant IDs, since they are generated on a per-room basis. This is how I do it in my Android code:
@Override
public ArrayList<String> getActiveRoomPlayerIDs() {
    if (mRoomCurrent != null) {
        ArrayList<String> newList = new ArrayList<String>();
        for (Participant p : mRoomCurrent.getParticipants()) {
            dLog(listIgnoreTheseIDs.toString() + " is the list to ignore");
            if (mRoomCurrent.getParticipantStatus(p.getParticipantId()) == Participant.STATUS_LEFT) {
                dLog(p.getParticipantId() + " left the room");
            } else {
                newList.add(p.getParticipantId());
            }
        }
        return newList;
    }
    return null;
}
The reason I approach it like this is that, if the room participants change during the game, I can use this same approach to handle players leaving the room.
onRoomConnected is called for ALL participants with the SAME set of participants; the number of participants in that call does not vary from client to client.
Added here for the edit: on my libGDX side I then do this:
private ArrayList<String> SortThisList(ArrayList<String> currentRoomIds) {
    Collections.sort(currentRoomIds);
    return currentRoomIds;
}
Then I use the sorted list to determine player order...
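For example (a hedged sketch; sortedRoomIds and myId are assumptions carried over from the code above), the position of your own ID in the sorted list gives a stable player order on every device:

// Assuming sortedRoomIds is the list returned by SortThisList(...) and myId is
// this device's own participant ID.
int myPlayerIndex = sortedRoomIds.indexOf(myId); // same index on every device
boolean myTurn = (myPlayerIndex == 0);           // index 0 plays first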

If there are no other criteria that matter, you can use my technique: simply pick the alphabetically smallest participant ID as the server after room formation is complete.
/** Checks whether this user has the alphabetically smallest participant ID.
 * If so, then this user is the server.
 * @return whether this user should be the server.
 */
private boolean isServer()
{
    for (Participant p : mParticipants)
    {
        if (p.getParticipantId().compareTo(mMyId) < 0)
            return false;
    }
    return true;
}

I would suggest the following method:
When device A gets the response from Google that the room is connected, check whether any other participants are present. If none are present, assign device A as player 1.
When device B gets the response from Google, it will find that there is another participant other than itself. In this case, wait.
On device A, you will get a notification that a participant has connected; start the game now and send an appropriate message to device B.
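A rough sketch of that check (hedged: the myId and myTurn fields and the start-message helper are assumptions taken from the question's code, not part of the Play Games API):

import com.google.android.gms.games.multiplayer.Participant;
import com.google.android.gms.games.multiplayer.realtime.Room;

// ...

@Override
public void onRoomConnected(int statusCode, Room room) {
    int othersConnected = 0;
    for (Participant p : room.getParticipants()) {
        if (!p.getParticipantId().equals(myId) && p.isConnectedToRoom()) {
            othersConnected++;
        }
    }
    // Device A: nobody else is connected yet, so act as player 1 and wait for a peer.
    // Device B: someone is already connected, so wait for A's start message.
    myTurn = (othersConnected == 0);
}

// On device A, onPeersConnected(...) fires later; that is the point to start the
// game and send the start message (e.g. with a reliable-message helper like the
// one sketched earlier) to device B.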

Use Query Result as Argument in Next Level in GraphQL

Hullo everyone,
This has been discussed a bit before, but it's one of those things where there is so much scattered discussion resulting in various proposed "hacks" that I'm having a hard time determining what I should do.
I would like to use the result of a query as an argument for another nested query.
query {
  allStudents {
    nodes {
      courseAssessmentInfoByCourse(courseId: "2b0df865-d7c6-4c96-9f10-992cd409dedb") {
        weightedMarkAverage
        # getting the result for a specific course is easy enough
      }
      coursesByStudentCourseStudentIdAndCourseId {
        nodes {
          name
          # would like to be able to do something like this
          # to get a list of all the courses and their respective
          # assessment infos
          assessmentInfoByStudentId(studentId: student_node.studentId) {
            weightedMarkAverage
          }
        }
      }
    }
  }
}
Is there a way of doing this that is considered to be best practice?
Is there a standard way to do it built into GraphQL now?
Thanks for any help!
The only means to substitute values in a GraphQL document is through variables, and these must be declared in your operation definition and then included alongside your document as part of your request. There is no inherent way to reference previously resolved values within the same document.
If you get to a point where you think you need this functionality, it's generally a symptom of poor schema design in the first place. What follows are some suggestions for improving your schema, assuming you have control over that.
For example, minimally, you could eliminate the studentId argument on assessmentInfoByStudentId altogether. coursesByStudentCourseStudentIdAndCourseId is a field on the student node, so its resolver can already access the student's id. It can pass this information down to each course node, which can then be used by assessmentInfoByStudentId.
That said, you're probably better off totally rethinking how you've got your connections set up. I don't know what your underlying storage layer looks like, or the shape your client needs the data to be in, so it's hard to make any specific recommendations. However, for the sake of example, let's assume we have three types -- Course, Student and AssessmentInfo. A Course has many Students, a Student has many Courses, and an AssessmentInfo has a single Student and a single Course.
We might expose all three entities as root level queries:
query {
  allStudents {
    # fields
  }
  allCourses {
    # fields
  }
  allAssessmentInfos {
    # fields
  }
}
Each node could have a connection to the other two types:
query {
  allStudents {
    courses {
      edges {
        node {
          id
        }
      }
    }
    assessmentInfos {
      edges {
        node {
          id
        }
      }
    }
  }
}
If we want to fetch all students, and for each student know which courses they are taking and their weighted mark average for each course, we can then write a query like:
query {
  allStudents {
    assessmentInfos {
      edges {
        node {
          id
          course {
            id
            name
          }
        }
      }
    }
  }
}
Again, this exact schema might not work for your specific use case but it should give you an idea around how you can approach your problem from a different angle. A couple more tips when designing a schema:
Add filter arguments on connection fields, instead of creating separate fields for each scenario you need to cover. A single courses field on a Student type can have a variety of arguments like semester, campus or isPassing -- this is cleaner and more flexible than creating different fields like coursesBySemester, coursesByCampus, etc.
If you're dealing with aggregate values like average, min, max, etc., it might make sense to expose those values as fields on each connection type, in the same way a count field is sometimes available alongside the nodes field. There's a [proposal](https://github.com/prisma/prisma/issues/1312) for Prisma that illustrates one fairly neat way to handle these aggregate values. Doing something like this would mean that if you already have, for example, an Assessment type, a connection field might be sufficient to expose aggregate data about that type (like grade averages) without needing to expose a separate AssessmentInfo type.
Filtering is relatively straightforward; grouping is a bit tougher. If you do find that you need the nodes of a connection grouped by a particular field, again this may be best done by exposing an additional field on the connection itself, [like Gatsby does it](https://www.gatsbyjs.org/docs/graphql-reference/#group).

Firestore transaction produces console error: FAILED_PRECONDITION: the stored version does not match the required base version

I have written a bit of code that allows a user to upvote / downvote recipes in a manner similar to Reddit.
Each individual vote is stored in a Firestore collection named votes, with a structure like this:
{username,recipeId,value} (where value is either -1 or 1)
The recipes are stored in the recipes collection, with a structure somewhat like this:
{title,username,ingredients,instructions,score}
Each time a user votes on a recipe, I need to record their vote in the votes collection, and update the score on the recipe. I want to do this as an atomic operation using a transaction, so there is no chance the two values can ever become out of sync.
Following is the code I have so far. I am using Angular 6, however I couldn't find any Typescript examples showing how to handle multiple gets() in a single transaction, so I ended up adapting some Promise-based JavaScript code that I found.
The code seems to work, but there is something happening that is concerning. When I click the upvote/downvote buttons in rapid succession, some console errors occasionally appear. These read POST https://firestore.googleapis.com/v1beta1/projects/myprojectname/databases/(default)/documents:commit 400 (). When I look at the actual response from the server, I see this:
{
  "error": {
    "code": 400,
    "message": "the stored version (1534122723779132) does not match the required base version (0)",
    "status": "FAILED_PRECONDITION"
  }
}
Note that the errors do not appear when I click the buttons slowly.
Should I worry about this error, or is it just a normal result of the transaction retrying? As noted in the Firestore documentation, a "function calling a transaction (transaction function) might run more than once if a concurrent edit affects a document that the transaction reads."
Note that I have tried wrapping try/catch blocks around every single operation below, and there are no errors thrown. I removed them before posting for the sake of making the code easier to follow.
Very interested in hearing any suggestions for improving my code, regardless of whether they're related to the HTTP 400 error.
async vote(username, recipeId, direction) {
  let value;
  if (direction == 'up') {
    value = 1;
  }
  if (direction == 'down') {
    value = -1;
  }
  // assemble vote object to be recorded in votes collection
  const voteObj: Vote = { username: username, recipeId: recipeId, value: value };
  // get references to both vote and recipe documents
  const voteDocRef = this.afs.doc(`votes/${username}_${recipeId}`).ref;
  const recipeDocRef = this.afs.doc('recipes/' + recipeId).ref;
  await this.afs.firestore.runTransaction(async t => {
    const voteDoc = await t.get(voteDocRef);
    const recipeDoc = await t.get(recipeDocRef);
    const currentRecipeScore = recipeDoc.get('score'); // DocumentSnapshot.get() is synchronous
    if (!voteDoc.exists) {
      // This is a new vote, so add it to the votes collection
      // and apply its value to the recipe's score
      t.set(voteDocRef, voteObj);
      t.update(recipeDocRef, { score: (currentRecipeScore + value) });
    } else {
      const voteData = voteDoc.data();
      if (voteData.value == value) {
        // existing vote is the same as the button that was pressed, so delete
        // the vote document and revert the vote from the recipe's score
        t.delete(voteDocRef);
        t.update(recipeDocRef, { score: (currentRecipeScore - value) });
      } else {
        // existing vote is the opposite of the one pressed, so update the
        // vote doc, then apply it to the recipe's score by doubling it.
        // For example, if the current score is 1 and the user reverses their
        // +1 vote by pressing -1, we apply -2 so the score will become -1.
        t.set(voteDocRef, voteObj);
        t.update(recipeDocRef, { score: (currentRecipeScore + (value * 2)) });
      }
    }
    return Promise.resolve(true);
  });
}
According to Firebase developer Nicolas Garnier, "What you are experiencing here is how Transactions work in Firestore: one of the transactions failed to write because the data has changed in the mean time, in this case Firestore re-runs the transaction again, until it succeeds. In the case of multiple Reviews being written at the same time some of them might need to be ran again after the first transaction because the data has changed. This is expected behavior and these errors should be taken more as warnings."
In other words, this is a normal result of the transaction retrying.
I used RxJS throttleTime to prevent the user from flooding the Firestore server with transactions by clicking the upvote/downvote buttons in rapid succession, and that greatly reduced the occurrences of this 400 error. In my app, there's no legitimate reason someone would need to click upvote/downvote dozens of times per second. It's not a video game.

Merging a dynamic number of collections together

I'm working on my first Laravel project: a family tree. I have 4 branches of the family, each with people/families/images/stories/etc. A given user on the website will have access to everything for 1, 2, or 4 of these branches of the family (I don't want to show a cousin stuff for people they're not related to).
So on various pages I want the collections from the controller to contain stuff based on the given user's permissions. Merge seems like the right way to do this.
I have scopes to get people from each branch of the family, and in the following example I also have a scope for people with a birthday this month. To show the right set of birthdays for a given user, I can merge each group's birthdays individually, but only if the user has access to that branch.
Here's what my function would look like if I showed everyone in all 4 family branches:
public function get_birthday_people()
{
    $user = \Auth::user();
    $jones_birthdays = Person::birthdays()->jones()->get();
    $smith_birthdays = Person::birthdays()->smith()->get();
    $lee_birthdays = Person::birthdays()->lee()->get();
    $brandt_birthdays = Person::birthdays()->brandt()->get();
    $birthday_people = $jones_birthdays
        ->merge($smith_birthdays)
        ->merge($lee_birthdays)
        ->merge($brandt_birthdays);
    return $birthday_people;
}
My challenge: I'd like to modify it so that I check the user's access and only add each group of people accordingly. I'm imagining something where it's all the same as above except I add conditionals like this:
if ($user->jones_access) {
    $jones_birthdays = Person::birthdays()->jones()->get();
} else {
    $jones_birthdays = NULL;
}
But that throws an error for users without access because I can't call merge on NULL (or an empty array, or the other versions of 'nothing' that I tried).
What's a good way to do something like this?
if ($user->jones_access) {
    $jones_birthdays = Person::birthdays()->jones()->get();
} else {
    $jones_birthdays = new Collection;
}
Better yet, do the merge in the condition, no else required.
$birthday_people = new Collection;

if ($user->jones_access) {
    // merge() returns a new collection rather than mutating, so reassign the result
    $birthday_people = $birthday_people->merge(Person::birthdays()->jones()->get());
}
You are going to want your Eloquent query to only return the relevant data for the user requesting it. It doesn't make sense to query Lee birthdays when a Jones person is accessing that page.
So what you will wind up doing is something like
$birthdays = App\Person::where('family', $user->family)->get();
This pulls in Persons where their family property is equal to the family of the current user.
This probably does not match the way you have your relationships right now, but hopefully it will get you on the right track to getting them sorted out.
If you really want to go ahead with a bunch of queries and checking for authorization, read up on the authorization features of Laravel. It will let you assign abilities to users and check them easily.

How to prevent multiple users from adding an item to a Sharepoint list simultaneously

I am using a simple form to allow people to sign up for an event. Their details are saved to a Sharepoint list. I have a quota of people who can sign up for an event (say 100 people).
How can I prevent the 100th and the 101st person from signing up concurrently, causing the quota check to allow the 101st person to sign up (because the 100th person isn't in the list yet)?
Place the ItemAdding code inside a lock statement to make sure that only one thread at a time can enter the critical section of code:
private Object _lock = new Object();

public override void ItemAdding(SPItemEventProperties properties)
{
    lock (_lock)
    {
        // check the number of list items and cancel the event if necessary
    }
}
I came up with this idea of a solution for a farm with multiple WFEs - a shared resource (a row in a table, in the pseudo-code below) gets locked while the item is added to the list:
private Object _lock = new Object();

public override void ItemAdding(SPItemEventProperties properties)
{
    try
    {
        // 1. begin a SQL Server transaction
        // 2. UPDATE dbo.SEMAPHORE
        //    SET STATUS = 'Busy'
        //    WHERE PROCESS = 'EventSignup'
        lock (_lock)
        {
            // 3. check the number of list items and cancel the event if necessary
        }
    }
    finally
    {
        // 4. UPDATE dbo.SEMAPHORE
        //    SET STATUS = ''
        //    WHERE PROCESS = 'EventSignup'
        // 5. commit the SQL Server transaction
    }
}
I left the lock statement in because I'm not sure what will happen if the same front-end server tries to add items #100 and #101 - will the transaction lock the row, or will it not, because the same connection to SQL Server is used?
So you can use the event receiver's ItemAdding method. At ItemAdding, your item is not yet created, so you can calculate the current count of signed-up people; if it is bigger than 100 you can cancel adding the item.
But sure, more than one ItemAdding method can fire at the same time. To prevent that, you can calculate the current count of people, increase it by 1, and keep that value somewhere else (on a field of the event item, perhaps); all ItemAdding methods can then check that value before adding the item.
The ItemAdded method is too late for these operations.
This is the solution I would use.
I guess if you are updating a column, let's say "SignUp Count", then one of the users will get a save conflict. Whoever updates the value first wins and the second one will fail.

Implementing a hierarchical tree

Any idea how I can get started on building a hierarchical tree? This tree is passed an employeeID and a managerID. The links between nodes imply a relationship: a node higher up in the tree is a manager of nodes lower down. However, we want operations on the tree to be efficient, e.g. search should be O(lg n). Any ideas? Is this even possible?
EDIT:
I am in genuine need of help. Might I inquire why this question is being closed?
I would have a tree to manage the relationships, while maintaining a map to keep track of the nodes themselves.
Note that I didn't implement the hire, fire, or promote methods. They're pretty simple and a little beyond the scope of the basic structure (they're self-explanatory from the code below; if they don't jump out at you right away, then you need to study how it works a little more for your own sake!).
class OrgChart {
    // Assume these are properly constructed, etc...
    static class Employee {
        String name;
        EmployeeID id;
        Employee manager;
        Set<Employee> underlings;
    }

    static class EmployeeID {
        // what is the id? id number? division + badge number?
        // doesn't matter, as long as it has hashCode() and equals()
    }

    Map<EmployeeID, Employee> employeesById = new HashMap<>();
    Employee ceo = CEO.getTheCEO();

    public Employee getManagerFor(EmployeeID id) {
        Employee dilbert = employeesById.get(id);
        return dilbert.manager;
    }

    public Set<Employee> getEmployeesUnder(EmployeeID phbid) {
        Employee phb = employeesById.get(phbid);
        return phb.underlings;
    }
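
    // The hire/fire methods the answer leaves out could look roughly like this.
    // This is a hedged sketch: reassigning a fired employee's underlings to that
    // employee's manager is an assumption, not something the answer specifies.
    public void hire(Employee manager, Employee newHire) {
        newHire.manager = manager;
        manager.underlings.add(newHire);
        employeesById.put(newHire.id, newHire);
    }

    public void fire(EmployeeID id) {
        Employee gone = employeesById.remove(id);
        gone.manager.underlings.remove(gone);
        for (Employee e : gone.underlings) {
            e.manager = gone.manager;         // reports move up to the fired employee's manager
            gone.manager.underlings.add(e);
        }
    }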
}
You create an object which contains 2 properties:
Manager (hierarchical up)
Employees (hierarchical down) --> a collection
That's it.
You could even realize it without the relation to the manager if you always start at the "big boss" and do a top-down search.
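A minimal sketch of that object (illustrative Java; the class and field names are assumptions):

import java.util.ArrayList;
import java.util.List;

class OrgNode {
    OrgNode manager;                              // hierarchical up
    List<OrgNode> employees = new ArrayList<>();  // hierarchical down
}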
Well, with any tree, I feel that you should treat the nodes as equal, in that any node can contain subnodes (and those subnodes can contain subnodes of their own). With that in mind, for most trees I tend to take a parent -> child approach, for example:
User Table:
ID ParentID Name
1 0 Joe
2 0 Sally
3 2 Jim
Now, in this table, Joe & Sally are root level, while Jim is a child (employee in this case) of Sally. This can continue, with other users being children of users that are, even themselves, children of others and so on....
The benefit of this approach is that you make all users equal in the eyes of the tree control. You won't need customized object collections for each specific level, or complex logic for determining user types and injecting them into the right node. If you have to sort them manually, all you need is a simple recursive function to check the parents for children based on their ID (with 0 as root in this example).
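As a rough illustration of that recursive step, here is a hedged Java sketch (UserRow and UserNode are hypothetical types standing in for rows of the table above):

import java.util.ArrayList;
import java.util.List;

// Hypothetical types standing in for rows of the User table above.
class UserRow { int id; int parentId; String name; }
class UserNode { UserRow row; List<UserNode> children = new ArrayList<>(); }

class TreeBuilder {
    // Recursively attach every row whose ParentID matches the given parent;
    // calling buildChildren(0, rows) returns the root-level nodes, as in the example.
    static List<UserNode> buildChildren(int parentId, List<UserRow> rows) {
        List<UserNode> nodes = new ArrayList<>();
        for (UserRow r : rows) {
            if (r.parentId == parentId) {
                UserNode n = new UserNode();
                n.row = r;
                n.children = buildChildren(r.id, rows);
                nodes.add(n);
            }
        }
        return nodes;
    }
}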
As for the actual implementation, at least in ASP.NET, I would suggest looking in to the use of a TreeView, HierarchicalDataSourceControl, and a HierarchicalDataSourceView. Together these three objects let you implement data trees relatively quickly, and the patterns used are pretty generic, so you can adapt it to use future data objects.
