Firestore transaction produces console error: FAILED_PRECONDITION: the stored version does not match the required base version

I have written a bit of code that allows a user to upvote / downvote recipes in a manner similar to Reddit.
Each individual vote is stored in a Firestore collection named votes, with a structure like this:
{username,recipeId,value} (where value is either -1 or 1)
The recipes are stored in the recipes collection, with a structure somewhat like this:
{title,username,ingredients,instructions,score}
Each time a user votes on a recipe, I need to record their vote in the votes collection, and update the score on the recipe. I want to do this as an atomic operation using a transaction, so there is no chance the two values can ever become out of sync.
Following is the code I have so far. I am using Angular 6; however, I couldn't find any TypeScript examples showing how to handle multiple get() calls in a single transaction, so I ended up adapting some Promise-based JavaScript code that I found.
The code seems to work, but something concerning is happening. When I click the upvote/downvote buttons in rapid succession, console errors occasionally appear. They read POST https://firestore.googleapis.com/v1beta1/projects/myprojectname/databases/(default)/documents:commit 400 (). When I look at the actual response from the server, I see this:
{
  "error": {
    "code": 400,
    "message": "the stored version (1534122723779132) does not match the required base version (0)",
    "status": "FAILED_PRECONDITION"
  }
}
Note that the errors do not appear when I click the buttons slowly.
Should I worry about this error, or is it just a normal result of the transaction retrying? As noted in the Firestore documentation, a "function calling a transaction (transaction function) might run more than once if a concurrent edit affects a document that the transaction reads."
Note that I have tried wrapping try/catch blocks around every single operation below, and there are no errors thrown. I removed them before posting for the sake of making the code easier to follow.
Very interested in hearing any suggestions for improving my code, regardless of whether they're related to the HTTP 400 error.
async vote(username, recipeId, direction) {
  let value;
  if (direction === 'up') {
    value = 1;
  }
  if (direction === 'down') {
    value = -1;
  }
  // assemble vote object to be recorded in votes collection
  const voteObj: Vote = { username: username, recipeId: recipeId, value: value };
  // get references to both vote and recipe documents
  const voteDocRef = this.afs.doc(`votes/${username}_${recipeId}`).ref;
  const recipeDocRef = this.afs.doc(`recipes/${recipeId}`).ref;
  await this.afs.firestore.runTransaction(async t => {
    const voteDoc = await t.get(voteDocRef);
    const recipeDoc = await t.get(recipeDocRef);
    // DocumentSnapshot.get() is synchronous, so no await is needed here
    const currentRecipeScore = recipeDoc.get('score');
    if (!voteDoc.exists) {
      // This is a new vote, so add it to the votes collection
      // and apply its value to the recipe's score
      t.set(voteDocRef, voteObj);
      t.update(recipeDocRef, { score: currentRecipeScore + value });
    } else {
      const voteData = voteDoc.data();
      if (voteData.value === value) {
        // existing vote is the same as the button that was pressed, so delete
        // the vote document and revert the vote from the recipe's score
        t.delete(voteDocRef);
        t.update(recipeDocRef, { score: currentRecipeScore - value });
      } else {
        // existing vote is the opposite of the one pressed, so update the
        // vote doc, then apply it to the recipe's score by doubling it.
        // For example, if the current score is 1 and the user reverses their
        // +1 vote by pressing -1, we apply -2 so the score will become -1.
        t.set(voteDocRef, voteObj);
        t.update(recipeDocRef, { score: currentRecipeScore + (value * 2) });
      }
    }
    return Promise.resolve(true);
  });
}

According to Firebase developer Nicolas Garnier, "What you are experiencing here is how Transactions work in Firestore: one of the transactions failed to write because the data has changed in the mean time, in this case Firestore re-runs the transaction again, until it succeeds. In the case of multiple Reviews being written at the same time some of them might need to be ran again after the first transaction because the data has changed. This is expected behavior and these errors should be taken more as warnings."
In other words, this is a normal result of the transaction retrying.
I used RxJS throttleTime to prevent the user from flooding the Firestore server with transactions by clicking the upvote/downvote buttons in rapid succession, and that greatly reduced the occurrences of this 400 error. In my app, there's no legitimate reason someone would need to click upvote/downvote dozens of times per second. It's not a video game.
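Roughly, the wiring looks like this. This is just a sketch: the component, the names voteClicks$ and onVoteClick, and the one-second window are illustrative, not from the original code.

import { Component, OnInit } from '@angular/core';
import { Subject } from 'rxjs';
import { throttleTime } from 'rxjs/operators';

@Component({ selector: 'app-recipe', template: '...' })
export class RecipeComponent implements OnInit {
  // button clicks are pushed through this Subject instead of calling vote() directly
  private voteClicks$ = new Subject<{ username: string; recipeId: string; direction: string }>();

  ngOnInit() {
    this.voteClicks$
      .pipe(throttleTime(1000)) // pass the first click through, ignore the rest for 1s
      .subscribe(c => this.vote(c.username, c.recipeId, c.direction));
  }

  onVoteClick(username: string, recipeId: string, direction: string) {
    this.voteClicks$.next({ username, recipeId, direction });
  }

  private vote(username: string, recipeId: string, direction: string) {
    // the transaction-based vote() from the question goes here
  }
}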

Related

How does Apollo paginated "read" and "merge" work?

I was reading through the docs to learn pagination approaches for Apollo. This is the simple example where they explain the paginated read function:
https://www.apollographql.com/docs/react/pagination/core-api#paginated-read-functions
Here is the relevant code snippet:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, { args: { offset, limit }}) {
            // A read function should always return undefined if existing is
            // undefined. Returning undefined signals that the field is
            // missing from the cache, which instructs Apollo Client to
            // fetch its value from your GraphQL server.
            return existing && existing.slice(offset, offset + limit);
          },
          // The keyArgs list and merge function are the same as above.
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});
I have one major question around this snippet and more snippets from the docs that have the same "flaw" in my eyes, but I feel like I'm missing some piece.
Suppose I run a first query with offset=0 and limit=10. The server will return 10 results, which are stored in the cache after passing through the merge function.
Afterwards, I run the query with offset=5 and limit=10. Based on the approach described in the docs and the above code snippet, my understanding is that I will get back only items 5 through 10 instead of items 5 through 15, because Apollo will see that the existing variable is present in read (holding the initial 10 items) and will slice the available 5 items for me.
My question is: what am I missing? How will Apollo know to fetch new data from the server? How will new data arrive in the cache after the initial query? Keep in mind keyArgs is set to [] so the results will always be merged into a single item in the cache.
Apollo will not slice anything automatically. You have to define a merge function that keeps the data in the correct order in the cache. One approach is to keep an array with empty slots for data not yet fetched, and to place incoming data at its respective indexes. For instance, if you fetch items 30-40 out of a total of 100, your array would have 30 empty slots, then your 10 items, then 60 empty slots. If you subsequently fetch items 70-80, those will be placed at their respective indexes, and so on.
Your read function is where the decision is made on whether a network request is necessary. If you find all the requested data in existing, you return it and no request to the server is made. If any items are missing, you return undefined, which triggers a network request; your merge function is then called once data is fetched, and finally your read function runs again, this time finding the data in the cache and returning it.
This approach applies to the cache-first fetch policy, which is the default.
The logic for returning undefined from your read function is implemented by you; there is no Apollo magic under the hood.
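To make that concrete, here is a minimal sketch of the docs' feed field policy extended with such a hole check. The hole-detection logic is my own illustration of the idea, not something Apollo does for you.

import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: [],
          read(existing, { args: { offset = 0, limit = 10 } }) {
            if (!existing) return undefined;
            const page = existing.slice(offset, offset + limit);
            // Treat any hole in the requested window as a cache miss so
            // Apollo Client goes to the network. (Caveat: if the full list
            // is shorter than offset + limit, this will always refetch.)
            for (let i = 0; i < limit; ++i) {
              if (page[i] === undefined) return undefined;
            }
            return page;
          },
          merge(existing, incoming, { args: { offset = 0 } }) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i]; // leaves holes for unfetched ranges
            }
            return merged;
          },
        },
      },
    },
  },
});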
If you use the cache-and-network fetch policy, then your read doesn't need to return undefined when data is missing, since a network request will be made regardless of what the cache returns.

GraphQL Authorization / Permission

So basically, how do you handle permissions?
Let's say we have a list of Posts, with an argument first to limit the number of posts returned. Only the owner and approved users may read the posts; everyone else may not. What is the best way to implement this?
query {
  viewer {
    posts(first: 10) {
      id
      text
    }
  }
}
What I'm currently thinking is to have a single source of truth for whether a user can read a post or not, and to hook it up with the dataloader module.
But how do I query for exactly 10 posts? If I query my DB for exactly 10 rows and then filter them with some business logic, I can end up with, say, only 8 posts returned.
One solution is to not put a limit on the query, but that's not very efficient. So what is a good way to go about this?
Inspiration from here
(1) https://dev-blog.apollodata.com/auth-in-graphql-part-2-c6441bcc4302
(2) https://dev-blog.apollodata.com/graphql-at-facebook-by-dan-schafer-38d65ef075af
(1) solved it by
export const DB = {
  Lists: {
    all: (user_id) => {
      return sql.raw("SELECT id FROM lists WHERE owner_id IS NULL OR owner_id = %s", user_id);
    }
  }
}
as the query, and then to filter out which rows can be read:
resolve: (root, _, ctx) => {
  // factor out data fetching
  return DB.Lists.all(ctx.user_id)
    .then(lists => {
      // enforce auth on each node
      return lists.map(auth.List.enforce_read_perm(ctx.user_id));
    });
}
So, we can clearly see that it's querying for all the rows, even if, say, the first argument was 1, which is what I'm trying to avoid.
Maybe I'm approaching the problem the wrong way; since the business logic lives in a different layer than the DB, there seems to be no option but to query all the rows. Any help appreciated.
For future reference and other people searching for solutions.
Used Dataloader to solve the authentication problem.
Literally implemented what they did in https://dev-blog.apollodata.com/graphql-at-facebook-by-dan-schafer-38d65ef075af and used this boilerplate repo as guidance. Not much more to say than that.
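For anyone after the rough shape of that approach, here is a minimal sketch of the "batch-load nodes, then enforce read permission on each one" pattern from the linked post. db.query, the table and column names, and canRead are placeholders of mine, not taken from the boilerplate repo.

const DataLoader = require('dataloader');

// Batch-load posts by id; DataLoader coalesces all .load()/.loadMany()
// calls made in the same tick into one batch. Create one loader per request.
const makePostLoader = () => new DataLoader(async (ids) => {
  const rows = await db.query('SELECT * FROM posts WHERE id = ANY($1)', [ids]);
  const byId = new Map(rows.map(r => [r.id, r]));
  return ids.map(id => byId.get(id) || null); // results must match input order
});

// Single source of truth for read permission (placeholder logic).
function canRead(viewerId, post) {
  return post.owner_id === viewerId || post.approved_ids.includes(viewerId);
}

// Resolver: fetch candidate ids, batch-load, then enforce auth per node.
const resolvers = {
  Viewer: {
    posts: async (_viewer, { first }, ctx) => {
      const ids = await db.query('SELECT id FROM posts LIMIT $1', [first]);
      const posts = await ctx.postLoader.loadMany(ids.map(r => r.id));
      return posts.filter(p => p && canRead(ctx.viewerId, p));
    },
  },
};

Note this still has the over-fetching caveat from the question: if you need exactly `first` visible posts, you have to fetch extra rows and filter, or push the permission predicate into the query itself.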

Parse cloud job set() function weirdness

I'm trying to run this cloud job weekly on Parse, assigning a rank to players based on their high scores. The code mostly seems to work, except it only sets ranks 1 through 9. Any rank with more than one digit does not get set!
The job returns a success after setting ranks 1-9.
Parse.Cloud.job("TestJob", function(request, status)
{
Parse.Cloud.useMasterKey();
var rank = 0;
var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
usersQuery.find(function(results){
for(var i=0;i<results.length;++i)
{
rank += 1;
console.log("Setting "+results[i].get('Name')+" rank to "+rank);
results[i].save({"Rank": rank});
}
}).then(function(){
status.success("Weekly Ranks Assigned");
}, function(error){
status.error("Uh oh. Weekly ranking failed");
})
})
In the console log, it clearly says "Setting playerName rank to 11", but it doesn't actually set anything in the Parse table; the value stays undefined (or whatever it was previously).
Does the code look right? Is there something JavaScript-related that I'm missing?
Updated based on answers:
Apparently I'm not waiting for the saves to complete, but I'm not sure how to write the promise-handling code. Here's what I have:
var rank = 0;
var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
usersQuery.find().then(function(results)
{
  var promises = [];
  for(var i=0;i<results.length;i++)
  {
    rank += 1;
    promises.push(results[i].save({"Rank": rank})); // Array.push, not append
  }
  return promises;
})
What do I do with the list of promises? Where do I wait for them to complete?
Your code does not wait for saves to complete so it's going to have unpredictable results. It also isn't going to run through all users, just the first 'page' returned by the query.
So, instead of using find you should consider using each. You also need to consider whether the job will have time to process all users; it may need to run multiple times.
For the save, you should add each promise that is returned to an array, and then wait for all of the promises to complete before calling status.success.
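Putting that together, a rough sketch of the promise-array version using the legacy SDK's Parse.Promise.when (note that find() still only returns one page of results, as mentioned above):

Parse.Cloud.job("TestJob", function(request, status) {
  Parse.Cloud.useMasterKey();
  var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
  usersQuery.find().then(function(results) {
    var promises = [];
    for (var i = 0; i < results.length; i++) {
      // rank is i + 1 because results are already sorted by HighScore
      promises.push(results[i].save({ "Rank": i + 1 }));
    }
    // Resolves only after every save in the array has finished.
    return Parse.Promise.when(promises);
  }).then(function() {
    status.success("Weekly Ranks Assigned");
  }, function(error) {
    status.error("Uh oh. Weekly ranking failed");
  });
});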

Parse.User query not working in Cloud Code

I am working on a project using Parse where I need some information calculated for each user and updated when they update their account. I created a Cloud Code trigger that does what I need whenever a user account is updated, and that is working well. However, I have about two thousand existing accounts that I need to update as well. After hours of trying to get a Cloud Job to work, I decided to simplify it. I wrote the following job to simply count the user accounts. To reiterate: I'm not actually trying to count the users (there are much more efficient ways to do that); I am trying to verify that I can query and loop over the existing user accounts. (The call to useMasterKey is in there because I will need it later.)
Parse.Cloud.job("getUserStatistics", function(request, status) {
// Set up to modify user data
Parse.Cloud.useMasterKey();
// Query for all users
var query = new Parse.Query(Parse.User);
var counter = 0;
query.each(function(user) {
counter = counter+1;
}).then(function() {
// Set the job's success status
status.success("Counted all User Accounts.");
}, function(error) {
// Set the job's error status
status.error("Failed to Count User Accounts.");
});
console.log('Found '+counter+' users.');
});
When I run the code, I get:
I2015-07-09T17:29:10.880Z]Found 0 users.
I2015-07-09T17:29:12.863Z]v99: Ran job getUserStatistics with:
Input: "{}"
Result: Counted all User Accounts.
Even more baffling to me, if I add:
query.limit(10);
...the query itself actually fails! (I would expect it to count 10 users.)
That said, if there is a simpler way to trigger an update on all the users in a Parse application, I'd love to hear it!
The reference actually says:
The query may not have any sort order, and may not use limit or skip.
https://parse.com/docs/js/api/symbols/Parse.Query.html#each
So forget about query.limit(10); that's not relevant here.
Anyway, judging by their example of a background job, it seems you might have forgotten to put a return in your each function. Also, you called console.log('Found '+counter+' users.'); outside of your asynchronous task, which explains why you get 0 results. Maybe try:
query.each(function(user) {
  counter = counter+1;
  // you'll want to save your changes for each user,
  // therefore, you will need this
  return user.save();
}).then(function() {
  // Set the job's success status
  status.success("Counted all User Accounts.");
  // console.log inside the asynchronous scope
  console.log('Found '+counter+' users.');
}, function(error) {
  // Set the job's error status
  status.error("Failed to Count User Accounts.");
});
You can check again Parse's example of writing this cloud job.
https://parse.com/docs/js/guide#cloud-code-advanced-writing-a-background-job

Document concurrent update

I have a document like:
{
  owner: 'alex',
  live: 'some guid'
}
Two or more users can update the live field simultaneously.
How can I make sure that only the first user wins and the other updates fail?
You can get the semantics you want if you store a counter like timesUpdated in the document. Operations on a single document are atomic, so you can check that the field has the value you expect and throw an error if it doesn't.
It might look something like:
var timesUpdated = 3
r.table('foo').get(rowId).update(function(row) {
  return r.branch(row('timesUpdated').eq(timesUpdated),
    {
      timesUpdated: row('timesUpdated').add(1),
      live: 'some special value'
    },
    r.error('Someone else updated the live field!')
  );
}, {returnChanges: true})
So if another query comes in before yours while timesUpdated = 3, your query will blow up. When do you get timesUpdated? That depends on how your app is designed and what you're trying to do.
Another thing to note is that adding {returnChanges: true} is really useful because it allows you to get the new value of timesUpdated atomically. You can also see what exactly changed in the updated document.
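For example, one common pattern is to read timesUpdated when the user loads the document and hand it back with the update attempt. A minimal sketch with the official JavaScript driver, assuming rowId and an open connection conn already exist:

var r = require('rethinkdb');

// Read the current counter when the user opens the document...
r.table('foo').get(rowId).run(conn)
  .then(function(doc) {
    var timesUpdated = doc.timesUpdated;
    // ...then hand it back with the conditional update attempt.
    return r.table('foo').get(rowId).update(function(row) {
      return r.branch(row('timesUpdated').eq(timesUpdated),
        { timesUpdated: row('timesUpdated').add(1), live: 'some guid' },
        r.error('Someone else updated the live field!'));
    }, { returnChanges: true }).run(conn);
  })
  .then(function(result) {
    // result.errors > 0 means another writer won the race;
    // result.changes holds the new document, including the new timesUpdated.
    console.log(result);
  });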