GraphQL Relay hasNextPage

How does GraphQL generate hasNextPage if only the "first" parameter is passed?
I am using
return relay.connectionFromPromisedArray(
  global.app.get('model__user').getUsers(args),
  args
);
and query:
query RootQueryType {
  viewer {
    user(id: 1) {
      id
      email
      friends(first: 5) {
        edges { cursor, node { id, email } }
        pageInfo { hasNextPage }
      }
    }
  }
}
So how can I pass the friends count to graphql / relay so that hasNextPage is generated correctly?

Relay pagination is not page based, but rather cursor based. So you paginate by saying "I want X items after item Y". Item Y is identified not by a page number or an offset, but by a pointer to that exact object, a so-called cursor. This model of pagination works well for infinite scrolling, for example. "Pages" are also stable after adding or removing items, as they don't depend on the number of items.
hasNextPage in the Relay GraphQL spec simply indicates whether there are more items after the last element that has been retrieved. So in your case, it means there are more than 5 elements in total and you'll get more elements if you do
friends(first: 5, after: "CURSOR_TO_THE_LAST_ELEMENT")
You can retrieve the cursor from the edges list; it's one of the fields alongside node on each edge.
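For illustration, the friends(first: 5) query above returns a connection shaped roughly like this (values and cursor strings are made up, and only the first and last of the five edges are shown); the cursor of the last edge is what you pass as after:
{
  "friends": {
    "edges": [
      { "cursor": "YXJyYXljb25uZWN0aW9uOjA=", "node": { "id": "1", "email": "a@example.com" } },
      { "cursor": "YXJyYXljb25uZWN0aW9uOjQ=", "node": { "id": "5", "email": "e@example.com" } }
    ],
    "pageInfo": { "hasNextPage": true }
  }
}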

You can find detailed information on the relay pagination algorithm here: https://facebook.github.io/relay/graphql/connections.htm#sec-Pagination-algorithm.
To answer your specific question about hasNextPage, this is the algorithm:
function hasNextPage(allEdges, before, after, first, last) {
  // If first was not set, return false.
  if (first === null) { return false; }
  // Apply the before & after cursor arguments to the set of edges,
  // i.e. edges is the set of edges between the before and after cursors.
  const edges = ApplyCursorsToEdges(allEdges, before, after);
  // If more edges exist between the before & after cursors than
  // you are asking for, then there is a next page.
  if (edges.length > first) { return true; }
  return false;
}
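As for passing the friends count so that hasNextPage comes out right: connectionFromPromisedArray computes hasNextPage from the length of the array you give it, so it is only accurate if getUsers(args) resolves to the complete friends list. If you only fetch one page from the database, graphql-relay also has slice helpers that let you supply the total count yourself. A minimal sketch, assuming a hypothetical getUserCount(args) on your model and a getUsers(args) that returns just the requested slice:
// Sketch only: getUserCount() is a hypothetical model method; adapt to your schema.
var relay = require('graphql-relay');

function resolveFriends(user, args) {
  var model = global.app.get('model__user');
  // Offset of the first item in the slice we are about to fetch.
  var offset = args.after ? relay.cursorToOffset(args.after) + 1 : 0;
  return model.getUserCount(args).then(function (totalCount) {
    return relay.connectionFromPromisedArraySlice(
      model.getUsers(args),   // assumed to return only the requested page
      args,
      { sliceStart: offset, arrayLength: totalCount }
    );
  });
}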
A quick note on cursor vs page based pagination. It is generally a bad idea to paginate using fixed page sizes. A classic example of this is using the OFFSET keyword in SQL to grab the next page. There are many issues with this approach. For example, what would happen if a new object was inserted while you were in the middle of paginating the set? If the new object was inserted before the page you are currently grabbing and you use a fixed offset you are going to grab an object that you have already grabbed which leads to duplicate data in your presentation layer. Using cursors for pagination fixes this problem by allowing you to keep track of the objects themselves instead of counts of the objects.
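To make the difference concrete, a toy illustration in plain JavaScript (not tied to any database):
// Offset-based: "give me items 10..19". If a new item is inserted before
// index 10 between requests, item 10 was already seen on the previous page.
var page2 = items.slice(10, 20);

// Cursor-based: "give me 10 items after the item with id === afterId".
// Insertions elsewhere in the list don't shift this window.
var start = items.findIndex(function (it) { return it.id === afterId; }) + 1;
var page = items.slice(start, start + 10);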
One last thing about Relay pagination specifically. I recommend only using (first & after) OR (last & before) at any given time. Using both in the same query can lead to logical, yet unexpected results.
Best of luck!

Related

Sort a list of futures (Dart)

So I have a set of options that each contain an int value representing their ordinal.
These options are stored in a remote database with each option being a record.
As such, when I fetch them from the db I end up with a list of futures:
e.g. List<Future<Option>>
I need to be able to sort these Options.
The following DartPad shows a simplified view of what I'm trying to achieve:
https://dartpad.dartlang.org/a5175401516dbb9242a0edec4c89fef6
The Options MUST be futures.
My original solution was to copy the Options into a list, 'complete' them, then sort the list.
This however caused other problems, and as such I need to do an in-situ sort on the original list.
You cannot sort the futures before they have completed, and even then, you need to extract the values first.
If you need to have a list of futures afterwards, this is what I would do:
import 'dart:async';

List<Future<T>> sortFutures<T>(List<Future<T>> input, [int Function(T, T)? compare]) {
  // One completer per input future; their futures are returned immediately.
  var completers = [for (var i = 0; i < input.length; i++) Completer<T>()];
  // When all inputs have completed, sort the values and complete the
  // completers in that sorted order.
  Future.wait(input).then((values) {
    values.sort(compare);
    for (var i = 0; i < values.length; i++) {
      completers[i].complete(values[i]);
    }
  });
  return [for (var c in completers) c.future];
}
This does not return the original futures, because you don't know the ordering at the time you have to return them. It does return futures which complete with the same values.
If any of the futures completes with an error, then this blows up. You'll need more error handling if that is possible.
Gentlefolk,
thanks for the help.
julemand101's suggestion of using Future.wait() led me to the answer.
It also helped me better understand the problem.
I've done a new gist that more accurately shows what I was attempting to do.
Essentially, when we do a db request over the network we get an entity back.
The problem is that the entity will often have references to other entities.
This can end up with a whole tree of entities needing to be returned.
Often you don't need any of these entities.
So the solution we went for is to only return the database 'id' of each child entity (only the immediate children).
We then store those id's in a class RefId (see below).
The RefId is essentially a future that has the entities id and knows how to fetch the entity from the db.
When we actually need to access a child entity we force the RefId to complete (i.e. retrieve the entity across the network boundary).
We have a whole caching scheme to keep this performant as well as the ability to force the fetching of child elements, as part of the parent request, where we know up front they will be needed.
The options in my example are essentially menu items that need to be sorted.
But of course I can't sort them until they have been retrieved.
So a re-written example and answer:
https://dartpad.dartlang.org/369e71bb173ba3c19d28f6d6fec2072a
Here is the actual IdRef class we use:
https://dartpad.dartlang.org/ba892873a94d9f6f3924436e9fcd1b42
It now has a static resolveList method to help with this type of problem.
Thanks for your assistance.

Proper Upsert (Atomic Update Counter Field or Insert Document) with RethinkDB

After looking at some SO questions and issues on the RethinkDB GitHub, I failed to come to a clear conclusion: is an atomic upsert possible?
Essentially I would like to perform the same operation as ZINCRBY using Redis.
If member does not exist in the sorted set, it is added with increment
as its score (as if its previous score was 0.0). If key does not
exist, a new sorted set with the specified member as its sole member
is created.
The current implementation appears to differ from almost all databases that I have used, with the data being replaced or inserted, not updated. This is a simple use case: update the last visit, update the number of clicks, update a product quantity. So I must be missing something very obvious, because I cannot see a simple way to do this.
Yes, it is possible. After a get on the key, perform an atomic replace. Something like this might work:
function set_or_increment_score(player, points) {
  return r.table('scores').get(player).replace(
    row => ({
      id: player,
      score: r.branch(
        row.eq(null),
        points,
        row('score').add(points))
    }));
}
It has the following behaviour:
> set_or_increment_score("alice", 1).run(conn)
{ inserted: 1 }
> set_or_increment_score("alice", 2).run(conn)
{ replaced: 1 }
It works because get returns null when the document doesn't exist, and a replace on a non-existing document turns into an insert. See the documentation for replace.
So I ended up using the following code to get around the no-update issue.
r.db("test").table("t").insert(
{id:"A", type:"player", species:"warrior", score:0, xp:0, armor:0},
{conflict: function(id, oldDoc, newDoc) {
return newDoc.merge(oldDoc).merge(
{armor: oldDoc("armor").add(1)});
}
}
)
Do you think this is more readable/elegant or do you see any issues with the code compared to your sample?

How to cache IQueryable result for paging

What is the best way to cache an IQueryable result if every call needs to calculate a lot of things and return them to the client?
Code Sample
[Queryable]
public IQueryable<Car> Get()
{
    try
    {
        var result = GetCarList();
        // GetCarList() calculation is taking around 1 min
        return result.AsQueryable();
    }
    catch
    {
        throw; // error handling omitted
    }
}
List<Car> GetCarList()
{
    var query = from car in db.CarDetail
                where car.color == "white"
                select car;
    // 10k records of white cars are selected without considering makers
    // white is mandatory
    var result = new List<Car>();
    foreach (var car in query)
    {
        // Processing each record in every call
        result.Add(car);
    }
    return result;
}
Query sample
First Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10
Second Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10&$skip=10
Third Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10&$skip=20
Every call takes around 1 min, even though the calculation is the same for the current filter. What is the best way to cache this kind of API call?
As the OP explains in a comment, the object to cache is the list returned by the call to GetCarList(), and the result is always the same.
You can simply store this in the Cache; see the docs: Cache Class.
When you need it, check whether it's in the cache. If not, create it and store it in the cache before using it (anywhere you want to use it). As the Cache is thread safe, you will not have concurrency problems when accessing it from different requests.

Sorting a NotesDocumentCollection based on a date field in SSJS

Using server-side JavaScript, I need to sort a NotesDocumentCollection based on a field in the collection containing the date when the documents were created, or on any built-in creation-date field.
It would be nice if the function could take a sort option parameter so I could specify whether I want the result back in ascending or descending order.
The reason I need this is that I use database.getModifiedDocuments(), which returns an unsorted NotesDocumentCollection. I need to return the documents in descending order.
The following code is a modified snippet from OpenNTF which returns the collection in ascending order.
function sortColByDateItem(dc:NotesDocumentCollection, iName:String) {
  try {
    var rl:java.util.Vector = new java.util.Vector();
    var tm:java.util.TreeMap = new java.util.TreeMap();
    var doc:NotesDocument = dc.getFirstDocument();
    while (doc != null) {
      // Key each document by the (Java) date stored in the given item.
      tm.put(doc.getItemValueDateTimeArray(iName)[0].toJavaDate(), doc);
      doc = dc.getNextDocument(doc);
    }
    // TreeMap keeps its keys sorted, so values() comes back in ascending date order.
    var tCol:java.util.Collection = tm.values();
    var tIt:java.util.Iterator = tCol.iterator();
    while (tIt.hasNext()) {
      rl.add(tIt.next());
    }
    return rl;
  } catch (e) {
    // errors are silently swallowed here
  }
}
When you construct the TreeMap, pass a Comparator to the constructor. This allows you to define custom sorting instead of "natural" sorting, which by default sorts ascending. Alternatively, you can call descendingMap against the TreeMap to return a clone in reverse order.
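For example, a minimal tweak to the snippet above for descending order might look like this (a sketch; java.util.Collections.reverseOrder() works here because the keys are java.util.Date values):
// Build the TreeMap with a reverse-order Comparator so values() comes back descending.
var tm:java.util.TreeMap = new java.util.TreeMap(java.util.Collections.reverseOrder());
// ...populate tm exactly as in sortColByDateItem above...

// Or keep the natural (ascending) map and read it back reversed:
var descendingValues:java.util.Collection = tm.descendingMap().values();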
This is a very expensive methodology if you are dealing with a large number of documents. I mostly use a NotesViewEntryCollection (always sorted according to the source view) or a view navigator.
For large databases, you may use a view sorted by modified date and navigate through the entries of that view until the most recent date your code was executed (which you have to save somewhere).
For smaller operations, Tim's method is great!
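A rough sketch of that view-based approach (the view name "(ByLastModified)" and the saved lastRun date are assumptions, not part of the original answer):
// Assumes a view sorted descending by last-modified date, and a java.util.Date
// lastRun persisted from the previous execution.
var view:NotesView = database.getView("(ByLastModified)");
var nav:NotesViewNavigator = view.createViewNav();
var entry:NotesViewEntry = nav.getFirst();
while (entry != null) {
  var doc:NotesDocument = entry.getDocument();
  if (doc.getLastModified().toJavaDate().before(lastRun)) {
    break; // everything from here on is older than the last run
  }
  // ...process doc; entries arrive in descending modified order thanks to the view...
  entry = nav.getNext(entry);
}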

Model records ordering in Spine.js

As far as I can see in the Spine.js sources, the Model.each() function returns the Model's records in the order of their IDs. This is completely unreliable in scenarios where ordering is important: a long person list, etc.
Can you suggest a way to keep original records ordering (in the same order as they've arrived via refresh() or similar functions) ?
P.S.
Things are even worse because by default Spine.js internally uses new GUIDs as IDs, so record order is completely random, which is unacceptable.
EDIT:
It seems that in the latest commit (https://github.com/maccman/spine/commit/116b722dd8ea9912b9906db6b70da7948c16948a) they made it possible, but I have not tested it myself because I switched from Spine to Knockout.
I bumped into the same problem while learning spine.js. I'm using pure JS, so I had been neglecting the contacts example (http://spinejs.com/docs/example_contacts), which helped out on this one. As a matter of fact, you can't really keep the ordering from the server this way, but you can do your own ordering with JavaScript.
Notice that I'm using the Element pattern here (http://spinejs.com/docs/controller_patterns).
First, you define the function that is going to do the sorting inside the model:
/* Extending the Student model */
Student.extend({
  nameSort: function(a, b) {
    if ((a.name || a.email) > (b.name || b.email))
      return 1;
    else
      return -1;
  }
});
Then, in the students controller, you append the elements using that sort:
/* Controller that manages the students */
var Students = Spine.Controller.sub({
  /* code omitted for simplicity */
  addOne: function(student) {
    var item = new StudentItem({item: student});
    this.append(item.render());
  },
  addAll: function() {
    var sortedByName = Student.all().sort(Student.nameSort);
    var _self = this;
    $.each(sortedByName, function() { _self.addOne(this); });
  }
});
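If you also want the list to re-render itself whenever records arrive from the server, binding addAll to the model's events should do it (a sketch; the event wiring is my assumption, not part of the original answer):
var Students = Spine.Controller.sub({
  init: function() {
    // Re-run the sorted render whenever records are refreshed or change.
    Student.bind("refresh change", this.proxy(this.addAll));
  },
  /* addOne and addAll as shown above */
});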
And that's it.
