How do Apollo's paginated "read" and "merge" functions work?

I was reading through the docs to learn pagination approaches for Apollo. This is the simple example where they explain the paginated read function:
https://www.apollographql.com/docs/react/pagination/core-api#paginated-read-functions
Here is the relevant code snippet:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, { args: { offset, limit }}) {
            // A read function should always return undefined if existing is
            // undefined. Returning undefined signals that the field is
            // missing from the cache, which instructs Apollo Client to
            // fetch its value from your GraphQL server.
            return existing && existing.slice(offset, offset + limit);
          },
          // The keyArgs list and merge function are the same as above.
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});
I have one major question about this snippet (and other snippets from the docs that have the same "flaw" in my eyes), but I feel like I'm missing some piece.
Suppose I run a first query with offset=0 and limit=10. The server returns 10 results, which are stored in the cache after passing through the merge function.
Afterwards, I run the query with offset=5 and limit=10. Based on the approach described in the docs and the above code snippet, my understanding is that I will get only items 5 through 10 instead of items 5 through 15, because Apollo will see that the existing variable is present in read (with existing holding the initial 10 items) and will slice the available 5 items for me.
My question is: what am I missing? How will Apollo know to fetch new data from the server? How will new data arrive in the cache after the initial query? Keep in mind keyArgs is set to [], so the results will always be merged into a single item in the cache.

Apollo will not slice anything automatically. You have to define a merge function that keeps the data in the correct order in the cache. One approach would be to have an array with empty slots for data not yet fetched, and place incoming data in their respective index. For instance if you fetch items 30-40 out of a total of 100 your array would have 30 empty slots then your items then 60 empty slots. If you subsequently fetch items 70-80 those will be placed in their respective indexes and so on.
Your read function is where the decision on whether a network request is necessary will be made. If you find all the requested data in existing, you return it and no request to the server is made. If any items are missing, you need to return undefined (see the sketch below), which triggers a network request; your merge function then runs once the data is fetched, and finally your read function runs again, only this time the data is in the cache and it can return it.
This approach is for the cache-first caching policy which is the default.
The logic for returning undefined from your read function is implemented by you; there is no Apollo magic under the hood.
If you use the cache-and-network policy, your read function doesn't need to return undefined when data is missing, since a network request will be made regardless of what the cache returns.
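For illustration, here is a minimal sketch of such a read function, assuming the sparse-array merge from the docs snippet and that offset and limit are always supplied:

read(existing, { args: { offset = 0, limit } }) {
  if (!existing) return undefined;
  // Check every requested slot; a hole in the sparse array means that
  // range was never fetched, so report a cache miss.
  for (let i = offset; i < offset + limit; ++i) {
    if (existing[i] === undefined) {
      return undefined; // Apollo Client will now query the server
    }
  }
  return existing.slice(offset, offset + limit);
}

With this in place, the offset=5/limit=10 example above finds holes at indexes 10-14 and returns undefined; Apollo fetches that page, merge writes items 5-14 into their absolute positions, and the re-run read can serve all 10 items.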

Related

Firestore transaction produces console error: FAILED_PRECONDITION: the stored version does not match the required base version

I have written a bit of code that allows a user to upvote / downvote recipes in a manner similar to Reddit.
Each individual vote is stored in a Firestore collection named votes, with a structure like this:
{username,recipeId,value} (where value is either -1 or 1)
The recipes are stored in the recipes collection, with a structure somewhat like this:
{title,username,ingredients,instructions,score}
Each time a user votes on a recipe, I need to record their vote in the votes collection, and update the score on the recipe. I want to do this as an atomic operation using a transaction, so there is no chance the two values can ever become out of sync.
Following is the code I have so far. I am using Angular 6; however, I couldn't find any TypeScript examples showing how to handle multiple get() calls in a single transaction, so I ended up adapting some Promise-based JavaScript code that I found.
The code seems to work, but there is something happening that is concerning. When I click the upvote/downvote buttons in rapid succession, some console errors occasionally appear. These read POST https://firestore.googleapis.com/v1beta1/projects/myprojectname/databases/(default)/documents:commit 400 (). When I look at the actual response from the server, I see this:
{
  "error": {
    "code": 400,
    "message": "the stored version (1534122723779132) does not match the required base version (0)",
    "status": "FAILED_PRECONDITION"
  }
}
Note that the errors do not appear when I click the buttons slowly.
Should I worry about this error, or is it just a normal result of the transaction retrying? As noted in the Firestore documentation, a "function calling a transaction (transaction function) might run more than once if a concurrent edit affects a document that the transaction reads."
Note that I have tried wrapping try/catch blocks around every single operation below, and there are no errors thrown. I removed them before posting for the sake of making the code easier to follow.
Very interested in hearing any suggestions for improving my code, regardless of whether they're related to the HTTP 400 error.
async vote(username, recipeId, direction) {
  let value;
  if ( direction == 'up' ) {
    value = 1;
  }
  if ( direction == 'down' ) {
    value = -1;
  }
  // assemble vote object to be recorded in votes collection
  const voteObj: Vote = { username: username, recipeId: recipeId, value: value };
  // get references to both vote and recipe documents
  const voteDocRef = this.afs.doc(`votes/${username}_${recipeId}`).ref;
  const recipeDocRef = this.afs.doc('recipes/' + recipeId).ref;
  await this.afs.firestore.runTransaction( async t => {
    const voteDoc = await t.get(voteDocRef);
    const recipeDoc = await t.get(recipeDocRef);
    const currentRecipeScore = recipeDoc.get('score');
    if (!voteDoc.exists) {
      // This is a new vote, so add it to the votes collection
      // and apply its value to the recipe's score
      t.set(voteDocRef, voteObj);
      t.update(recipeDocRef, { score: (currentRecipeScore + value) });
    } else {
      const voteData = voteDoc.data();
      if ( voteData.value == value ) {
        // existing vote is the same as the button that was pressed, so delete
        // the vote document and revert the vote from the recipe's score
        t.delete(voteDocRef);
        t.update(recipeDocRef, { score: (currentRecipeScore - value) });
      } else {
        // existing vote is the opposite of the one pressed, so update the
        // vote doc, then apply it to the recipe's score by doubling it.
        // For example, if the current score is 1 and the user reverses their
        // +1 vote by pressing -1, we apply -2 so the score will become -1.
        t.set(voteDocRef, voteObj);
        t.update(recipeDocRef, { score: (currentRecipeScore + (value * 2)) });
      }
    }
    return Promise.resolve(true);
  });
}
According to Firebase developer Nicolas Garnier, "What you are experiencing here is how Transactions work in Firestore: one of the transactions failed to write because the data has changed in the mean time, in this case Firestore re-runs the transaction again, until it succeeds. In the case of multiple Reviews being written at the same time some of them might need to be ran again after the first transaction because the data has changed. This is expected behavior and these errors should be taken more as warnings."
In other words, this is a normal result of the transaction retrying.
I used RxJS throttleTime to prevent the user from flooding the Firestore server with transactions by clicking the upvote/downvote buttons in rapid succession, and that greatly reduced the occurrences of this 400 error. In my app, there's no legitimate reason someone would need to click upvote/downvote dozens of times per second. It's not a video game.
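For reference, a minimal sketch of that throttling (the element lookup and the 500 ms window are illustrative assumptions, not the app's actual code):

import { fromEvent } from 'rxjs';
import { throttleTime } from 'rxjs/operators';

// Allow at most one click per 500 ms to reach the vote() transaction,
// so rapid clicking can't pile up contending Firestore transactions.
const upvoteButton = document.querySelector('#upvote'); // hypothetical element
fromEvent(upvoteButton, 'click')
  .pipe(throttleTime(500))
  .subscribe(() => vote(username, recipeId, 'up')); // the vote() from the question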

GraphQL Relay hasNextPage

How does GraphQL generate hasNextPage if only the "first" parameter is passed?
I am using
return relay.connectionFromPromisedArray(
  global.app.get('model__user').getUsers(args),
  args
);
and query:
query RootQueryType {
  viewer {
    user(id: 1) {
      id
      email
      friends(first: 5) {
        edges {
          cursor
          node { id email }
        }
        pageInfo { hasNextPage }
      }
    }
  }
}
So how can I pass the friends count to graphql/relay so that hasNextPage will be generated correctly?
Relay pagination is not page-based, but rather cursor-based. You paginate by saying "I want X items after item Y". Item Y is referenced not by a page number or an offset, but by a pointer to that exact object, a so-called cursor. This model of pagination is nice for, for example, infinite scrolling. "Pages" are also stable after adding or removing items, as they don't depend on the number of items.
hasNextPage in the Relay GraphQL spec just indicates whether there are more items after the last element that has been retrieved. So in your case, it means there are more than 5 elements in total and you'll get more elements if you do
friends(first: 5, after: "CURSOR_TO_THE_LAST_ELEMENT")
You can retrieve the cursor from the edges list; it sits alongside node on each edge.
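For instance, a small sketch (the helper name is illustrative) of building the next page's arguments from a previous result:

// Take the cursor of the last edge received and ask for the next
// `pageSize` items after it.
function nextPageArgs(connection, pageSize) {
  const edges = connection.edges;
  return { first: pageSize, after: edges[edges.length - 1].cursor };
}

// e.g. nextPageArgs(result.viewer.user.friends, 5)
//      => { first: 5, after: "<cursor of the 5th edge>" }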
You can find detailed information on the relay pagination algorithm here: https://facebook.github.io/relay/graphql/connections.htm#sec-Pagination-algorithm.
To answer your specific question about hasNextPage, this is the algorithm:
function hasNextPage(allEdges, before, after, first, last) {
  // If first was not set, return false.
  if (first === null) { return false; }
  // Apply the before & after cursor arguments to the set of edges,
  // i.e. edges is the set of edges between the before and after cursors.
  const edges = ApplyCursorsToEdges(allEdges, before, after);
  // If more edges exist between the before & after cursors than
  // you are asking for, then there is a next page.
  if (edges.length > first) { return true; }
  return false;
}
A quick note on cursor-based vs page-based pagination. It is generally a bad idea to paginate using fixed page sizes. A classic example of this is using the OFFSET keyword in SQL to grab the next page. There are many issues with this approach. For example, what would happen if a new object was inserted while you were in the middle of paginating the set? If the new object was inserted before the page you are currently grabbing and you use a fixed offset, you are going to grab an object that you have already grabbed, which leads to duplicate data in your presentation layer. Using cursors for pagination fixes this problem by allowing you to keep track of the objects themselves instead of counts of the objects.
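A toy illustration of that duplicate-data problem, with a plain array standing in for the table:

let items = ['b', 'c', 'd', 'e'];
const page1 = items.slice(0, 2);  // ['b', 'c']  (OFFSET 0 LIMIT 2)
items = ['a', ...items];          // a new row arrives at the top
const page2 = items.slice(2, 4);  // ['c', 'd']  -- 'c' is shown twice

A cursor pinned to item 'c' would instead resume at 'd', regardless of the insertion.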
One last thing with Relay pagination specifically: I recommend only using (first & after) OR (last & before) at any given time. Using both in the same query can lead to logical, yet unexpected results.
Best of luck!

Parse cloud job set() function weirdness

I'm trying to run this cloud job weekly on Parse, where I assign a rank to players based on their high scores. This piece of code mostly seems to work, except it only sets ranks from 1 to 9. Anything with more than one digit does not get set!
The job returns success after setting ranks 1-9.
Parse.Cloud.job("TestJob", function(request, status)
{
Parse.Cloud.useMasterKey();
var rank = 0;
var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
usersQuery.find(function(results){
for(var i=0;i<results.length;++i)
{
rank += 1;
console.log("Setting "+results[i].get('Name')+" rank to "+rank);
results[i].save({"Rank": rank});
}
}).then(function(){
status.success("Weekly Ranks Assigned");
}, function(error){
status.error("Uh oh. Weekly ranking failed");
})
})
In the console log, it clearly says "setting playerName rank to 11", but it doesn't actually set anything in the Parse table. Just undefined (or whatever it was previously).
Does the code look right? Is there something JavaScript-related that I'm missing?
Updated based on answers:
Apparently I'm not waiting for the jobs to complete. But I'm not sure how to write code for handling promises. Here's what I have:
var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
usersQuery.find().then(function(results)
{
  var promises = [];
  for (var i = 0; i < results.length; i++)
  {
    promises.push(results[i].save({ "Rank": rank }));
  }
  return promises;
})
What do I do with the list of promises? Where do I wait for them to complete?
Your code does not wait for the saves to complete, so it's going to have unpredictable results. It also isn't going to run through all users, just the first 'page' returned by the query.
So, instead of using find you should consider using each. You also need to consider whether the job will have time to process all users; it may need to run multiple times.
For the save, you should add each promise that is returned to an array and then wait for all of the promises to complete before calling status.success.
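A minimal sketch of that last part, using the old Parse Cloud Code API from the question (Parse.Promise.when is that SDK's equivalent of Promise.all):

Parse.Cloud.job("TestJob", function(request, status)
{
  Parse.Cloud.useMasterKey();
  var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
  usersQuery.find().then(function(results) {
    var promises = [];
    for (var i = 0; i < results.length; i++)
    {
      // i is the 0-based position in the score-ordered results,
      // so the rank is i + 1
      promises.push(results[i].save({ "Rank": i + 1 }));
    }
    // Resolves only after every save() has finished
    return Parse.Promise.when(promises);
  }).then(function() {
    status.success("Weekly Ranks Assigned");
  }, function(error) {
    status.error("Uh oh. Weekly ranking failed");
  });
})

Note that find() still returns only one page of results (100 by default in the old SDK), and each() cannot be combined with descending(), so ranking a large table would need manual batching with find() plus skip/limit.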

How to cache IQueryable result for paging

What is the best way to cache an IQueryable result when every call needs to perform a lot of computation before returning to the client?
Code Sample
[Queryable]
public IQueryable<Car> Get()
{
    // GetCarList() takes around 1 minute to compute
    var result = GetCarList();
    return result.AsQueryable();
}
private List<Car> GetCarList()
{
    var query = from car in db.CarDetail
                where car.color == "white"
                select car;
    // 10k records of white cars are selected without considering makers
    // ("white" is mandatory)
    var cars = new List<Car>();
    foreach (var car in query)
    {
        // ...each record is processed on every call...
        cars.Add(car);
    }
    return cars;
}
Query sample
First Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10
Second Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10&$skip=10
Third Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10&$skip=20
Every call takes 1 minute, even though the calculation is the same for the current filter. What is the best way to cache this kind of API call?
As the OP explains in his comment, the object to cache is the list returned by the call to GetCarList(), and the result is always the same.
You can simply store this in Cache, see docs: Cache Class.
When you need it, check whether it's in the cache; if not, create it and store it in the cache before using it (anywhere you want to use it). As the Cache is thread safe, you will not have concurrency problems when accessing it from different requests.
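For what it's worth, a minimal sketch of that pattern in old-style ASP.NET Web API (the cache key, expiry, and controller wiring are illustrative assumptions):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Caching;
using System.Web.Http;

public class CarController : ApiController
{
    private const string CacheKey = "WhiteCarList"; // hypothetical key

    [Queryable]
    public IQueryable<Car> Get()
    {
        var cars = HttpRuntime.Cache[CacheKey] as List<Car>;
        if (cars == null)
        {
            // Pay the ~1 minute cost once, then serve from cache
            cars = GetCarList(); // GetCarList() as defined in the question
            HttpRuntime.Cache.Insert(CacheKey, cars, null,
                DateTime.UtcNow.AddMinutes(30),  // tune expiry to acceptable staleness
                Cache.NoSlidingExpiration);
        }
        // $filter/$orderby/$top/$skip run against the cached list
        return cars.AsQueryable();
    }
}

With this in place, only the first request after expiry pays the one-minute cost; every subsequent page request applies its OData options to the in-memory list.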

Model records ordering in Spine.js

As far as I can see in the Spine.js sources, the Model.each() function returns the Model's records in the order of their IDs. This is completely unreliable in scenarios where ordering is important: long person lists, etc.
Can you suggest a way to keep the original record ordering (the same order in which the records arrived via refresh() or similar functions)?
P.S.
Things are even worse because by default Spine.js internally uses new GUIDs as IDs, so the record order is completely random, which is unacceptable.
EDIT:
It seems that in the last commit https://github.com/maccman/spine/commit/116b722dd8ea9912b9906db6b70da7948c16948a
they made it possible, but I have not tested it myself because I switched from Spine to Knockout.
I bumped into the same problem learning Spine.js. I'm using pure JS, so I had been neglecting the contacts example (http://spinejs.com/docs/example_contacts), which helped out on this one. As a matter of fact, you can't really keep the ordering from the server this way, but you can do your own ordering in JavaScript.
Notice that I'm using the Element Pattern here (http://spinejs.com/docs/controller_patterns).
First you set the function which is gonna do the sorting inside the model:
/* Extending the Student Model */
Student.extend({
  nameSort: function(a, b) {
    if ((a.name || a.email) > (b.name || b.email))
      return 1;
    else
      return -1;
  }
});
Then, in the students controller you set the elements using the sort:
/* Controller that manages the students */
var Students = Spine.Controller.sub({
  /* code omitted for simplicity */
  addOne: function(student) {
    var item = new StudentItem({ item: student });
    this.append(item.render());
  },
  addAll: function() {
    var sortedByName = Student.all().sort(Student.nameSort);
    var _self = this;
    $.each(sortedByName, function() { _self.addOne(this); });
  },
});
And that's it.
