Should the opaque cursors in connections be stable across different field args?

The RANGE_ADD mutation requires an edgeName so that it can insert the new edge into the client-side connection. As part of its query, it also includes the cursor.
The issue is that the server has no way of knowing which args the client might be applying to a connection when it's generating the edge response.
Does this mean that the cursor should be stable?

In general, cursors are not required to be the same when connections are used with different arguments. For example, if I did:
{
  namedFriends: friends(orderby:NAME first:5) {
    edges { cursor, node { id } }
  }
  favoriteFriends: friends(orderby:FAVORITE first:5) {
    edges { cursor, node { id } }
  }
}
Different backends might be used to serve those two connections, since we might have different backends for the two orderings. Because of that, the cursors might be different for the same friend, since they might need to encode different information for the different backends.
This makes it tricky when performing a mutation, though:
mutation M {
  addFriend($input) {
    newFriendsEdge {
      cursor, node { id }   # Which cursor is this?
    }
  }
}
In cases like this, where the mutation is going to return an edge from a connection, it's useful for the field to accept the same non-pagination arguments that the connection does. So in the above case, we would do:
mutation M {
  addFriend($input) {
    newNamedFriendsEdge: newFriendsEdge(orderby:NAME) {
      cursor, node { id }   # Cursor for namedFriends
    }
    newFavoriteFriendsEdge: newFriendsEdge(orderby:FAVORITE) {
      cursor, node { id }   # Cursor for favoriteFriends
    }
  }
}
Ideally, the implementations of newFriendsEdge(orderby:FAVORITE) and favoriteFriends: friends(orderby:FAVORITE first:5) would share common code to generate cursors.
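As a rough sketch of that idea (the helper and type names below are hypothetical, not from the thread), both the connection resolver and the mutation's edge field could delegate to one cursor function that encodes different information per ordering:

// Hypothetical TypeScript sketch: one cursor helper shared by the
// friends(...) connection resolver and the newFriendsEdge(...) field.
type OrderBy = 'NAME' | 'FAVORITE';

interface Friend { id: string; name: string; favoriteRank: number; }

function encodeCursor(orderBy: OrderBy, friend: Friend): string {
  // Encode whatever the backend for this ordering needs to resume pagination.
  const payload = orderBy === 'NAME'
    ? { orderBy, key: friend.name, id: friend.id }          // name-sorted backend
    : { orderBy, key: friend.favoriteRank, id: friend.id }; // favorite-sorted backend
  return Buffer.from(JSON.stringify(payload)).toString('base64');
}

// Used by friends(orderby: ..., first: ...) for every edge it returns, and
// reused by the mutation payload's newFriendsEdge(orderby: ...) field, so the
// cursor handed back after addFriend matches the connection's cursors.
function makeFriendEdge(orderBy: OrderBy, friend: Friend) {
  return { cursor: encodeCursor(orderBy, friend), node: friend };
}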
Note that while the cursors are not required to be the same, it's fine if they are, as an implementation detail of the server; often the cursor is just the ID of the node, which is a common way for this to happen. In practice, if an argument on the connection doesn't affect the cursor, we would omit it from the mutation's edge field. So if orderby didn't affect the cursor, then:
mutation M {
  addFriend($input) {
    newFriendsEdge {
      cursor, node { id }   # orderby doesn't exist on newFriendsEdge, so this cursor must apply to both.
    }
  }
}
This is the common pattern in our mutations. Let me know if you run into any issues; we thought through the "arguments change cursors" case when developing the pattern of returning edges on mutations, to make sure there was a possible solution to it (which is when we came up with the idea of arguments on edge fields). It hasn't come up in practice all that much, though, so if you run into trickiness, definitely let me know, and we can and should revisit these assumptions and requirements!

Any way to split up multiple Fragment expansions for a GraphQL query into multiple calls?

Context
This problem is likely predicated on certain choices, some of which are changeable and some of which are not. We are using the following technologies and frameworks:
Relay / React / TypeScript
ContentStack (CMS)
Problem
I'm attempting to create a highly customizable page that can be built from multiple kinds of UI components based on the data presented to it (to allow pages to be assembled in the CMS from prefab UI components in an unpredictable order).
My first attempt at this was to create a set of fragments for the potential UI components that may be referenced in an array:
query CustomPageQuery {
  title
  description
  customContentConnection {
    edges {
      node {
        ... HeroFragment
        ... TweetBlockFragment
        ... EmbeddedVideoFragment
        # Further fragments are added here as we add more kinds of UI
      }
    }
  }
}
In the CMS we're using (ContentStack), the complexity of this query has grown to the point that it is rejected because it requires too many calls to the database in a single query. For that reason, I'm hoping there's a way I can split up the calls for the fragments so that they are not part of the initial query, or some similar solution that results in splitting up this query into multiple pieces.
I was hoping the @defer directive would solve this for me, but it's not supported by relay-compiler.
Any ideas?
Sadly @defer is still not a standard, so it is not supported by most implementations (you would also need the server to support it).
I am not sure if I understand the problem correctly, but you might want to look toward using @skip or @include to only fetch the fragments you need, depending on the type of the thing. It would require the frontend to know what it wants to query beforehand, though.
query CustomPageQuery($hero: Boolean!, $tweet: Boolean!, $video: Boolean!) {
  title
  description
  customContentConnection {
    edges {
      node {
        ... HeroFragment @include(if: $hero)
        ... TweetBlockFragment @include(if: $tweet)
        ... EmbeddedVideoFragment @include(if: $video)
      }
    }
  }
}
Generally you want to be able to discriminate the type without having to do a database query. So say:
type Hero {
  id: ID
  name: String
}

type Tweet {
  id: ID
  content: String
}

union Content = Hero | Tweet
And in the resolvers:
{
  Content: {
    __resolveType: (parent, ctx) => {
      // That should be able to resolve the type without a DB query
    },
  },
}
Once that is passed, each fragment is then resolved, making more database queries. If those are not properly batched with dataloaders, then you have an N+1 problem. I am not sure how much control (if any) you have over the backend, but there is no silver bullet for your problem.
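If you do control the backend, batching with the dataloader package would look roughly like the sketch below (the bulk-fetch function is an assumption standing in for whatever your data layer offers):

import DataLoader from 'dataloader';

// Hypothetical bulk fetch: one CMS/database round trip for many ids.
declare function fetchContentByIds(ids: readonly string[]): Promise<{ id: string }[]>;

async function batchGetContent(ids: readonly string[]) {
  const rows = await fetchContentByIds(ids);
  const byId = new Map(rows.map((row) => [row.id, row]));
  // DataLoader requires the results in the same order as the requested keys.
  return ids.map((id) => byId.get(id) ?? null);
}

// Create one loader per request (e.g. in the context factory) and have the
// resolvers behind HeroFragment / TweetBlockFragment call contentLoader.load(id),
// so many per-node lookups collapse into a single batched call.
const contentLoader = new DataLoader(batchGetContent);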
If you can't make optimizations on the backend, then I would suggest limiting the connection. They seem to be using cursor-based pagination, so you start with, say, first: 10, and once the first batch is returned you can query the next elements by setting after to the last cursor of the previous batch:
query CustomPageQuery($after: String) {
  customContentConnection(first: 10, after: $after) {
    edges {
      cursor
      node {
        ... HeroFragment
        ... TweetBlockFragment
        ... EmbeddedVideoFragment
      }
    }
    pageInfo {
      hasNextPage
    }
  }
}
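If you end up driving this manually rather than through Relay's pagination helpers, the client-side loop is straightforward; in this sketch, fetchGraphQL and the query string are placeholders for however you execute the query above:

// Hypothetical sketch: page through customContentConnection 10 nodes at a time.
async function fetchAllContent(
  fetchGraphQL: (query: string, variables: object) => Promise<{ data: any }>,
  query: string,
) {
  const nodes: any[] = [];
  let after: string | null = null;

  while (true) {
    const { data } = await fetchGraphQL(query, { after });
    const { edges, pageInfo } = data.customContentConnection;
    nodes.push(...edges.map((edge: any) => edge.node));

    if (!pageInfo.hasNextPage || edges.length === 0) break;
    after = edges[edges.length - 1].cursor; // last cursor of the previous batch
  }
  return nodes;
}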
As a last resort, you could try to first fetch all the IDs and then do subsequent queries to the CMS for each id (using aliases I guess) or type (if you can filter on the connection field). But I feel dirty just writing it so avoid it if you can.
{
  one: node(id: "UUID1") {
    ... HeroFragment
    ... TweetBlockFragment
    ... EmbeddedVideoFragment
  }
  two: node(id: "UUID2") {
    ... HeroFragment
    ... TweetBlockFragment
    ... EmbeddedVideoFragment
  }
}

How to ignore unknown enum values?

I'm wondering what would be the best way to ignore/discard the unknown enum values in GraphQL/Apollo server.
Let's say my GraphQL schema defines an array of enums, "enum Service { Supermarket, TicketSales }", and it works fine now. But later on, another service I depend on adds some new values (e.g. Playground) that my client just doesn't support, and I would like to ignore them and return the supported values without an error.
What would be the best way to do this in GraphQL? My first idea was to make a directive that would read the supported values from the schema and ignore everything else, but after googling around I didn't find any good examples of how to do it. Can you point me in a direction on how to go about this?
If your resolver function will accept arbitrary strings, then you can use a custom scalar type, or just String.
"""
The type of a service. `Supermarket` means..., and
`TicketSales` means...; any other value is ignored.
"""
scalar Service
GraphQL generally places responsibility on the client to conform to the server's expectations, rather than making the server try to support any request. There are a couple of places you can reasonably expect an enum value like this to appear:
enum Service { Supermarket, TicketSales }

type Query {
  inAReturnValue: Service!
  asAQueryParam(service: Service!): Node
}

type Mutation {
  asAMutationInput(service: Service!): Node
}
In particular it may not make sense to tell the server "make the type of this object be a playground" if the server just doesn't understand that. Conversely, if the server knows about "playground", it could return it in cases the client may not expect. Having an enum here makes it explicit what the server knows about. The server has said what it supports and it's the client's responsibility to cooperate.
Note that, if Service is an enum, it's possible for the client to find out whether the server supports playgrounds, which might help inform its behavior.
query GetServiceTypes {
  __type(name: "Service") {
    enumValues { name }
  }
}
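On the client side, the result of that introspection query could then be used to reconcile the server's values with the ones this build of the client understands; a minimal sketch, assuming a hard-coded KNOWN_SERVICES list:

// Values this build of the client was written against.
const KNOWN_SERVICES = new Set(['Supermarket', 'TicketSales']);

// serverValues is __type.enumValues from the GetServiceTypes query above.
function supportedServices(serverValues: { name: string }[]): string[] {
  return serverValues.map((v) => v.name).filter((name) => KNOWN_SERVICES.has(name));
}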
After playing around I found something that I can use to get around my original problem, so I will post it here in case somebody else is wondering the same thing.
So my original problem was, in short, that I'm receiving several different "available services" kinds of string arrays from other services, and I was thinking of mapping them to enums for better TypeScript support etc. The problem was that if I get some unknown value from another service, my GraphQL will fail.
So my original idea was to fix it with a directive, which I eventually got working:
# In schema
directive @mapUnknownTo(value: String) on ENUM

enum SomeAttribute @mapUnknownTo(value: "__UNKNOWN__") {
  SomeAttribute1
  AnotherAttribute
  SomethingElse
  __UNKNOWN__
}
And the directive implementation is:
import { SchemaDirectiveVisitor } from 'graphql-tools';
import { GraphQLEnumType } from 'graphql';

export class MapUnknownToDirective extends SchemaDirectiveVisitor {
  visitEnum(type: GraphQLEnumType) {
    const { value = '__UNKNOWN__' } = this.args;
    const valueMap = type.getValues().reduce(
      (map, v) => map.set(v.value, v.name),
      new Map<string, string>(),
    );
    type.serialize = (v: string): string => valueMap.get(v) || value;
  }
}
So this will map all the values not defined in schema into some custom value, which is not exactly what I originally wanted, but at least it's not giving an error, so it's okay-ish.
I'm still not 100% sure if directives are way to go on cases like this, but at least it's one possible solution.
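For reference, this is roughly how such a directive class gets registered with graphql-tools (v4–v6, where SchemaDirectiveVisitor lives); the import path and the typeDefs/resolvers names are placeholders for your own schema setup:

import { makeExecutableSchema } from 'graphql-tools';
import { MapUnknownToDirective } from './map-unknown-to-directive'; // hypothetical path

const schema = makeExecutableSchema({
  typeDefs,   // the SDL above, including the directive declaration
  resolvers,  // your resolver map
  schemaDirectives: {
    mapUnknownTo: MapUnknownToDirective, // key matches @mapUnknownTo in the SDL
  },
});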

Creating an optimistic response for a reordered connection

What is the best way to 'reorder' a connection in RelayJS?
In my user interface, I allow my user to 'swap' two items, but creating a mutation around that is a bit tricky.
What I'm doing right now is the naive way, namely using FIELDS_CHANGE to change my node itself.
It works, but the problem is I can't seem to write an optimistic update for it. I am able to just pass a list of ids to my graphql server, but that doesn't work for the optimistic update because it expects the actual data.
So I guess I have to mock out my 'connection' interface, but unfortunately, it still doesn't work. I 'copied' my reordered nodes to getOptimisticResponse but it seems to be ignored. The data matches the actual server response. (ids simplified)
original:
{
  item: {
    edges: [
      { cursor: 1, node: { id: 2 } },
      { cursor: 2, node: { id: 1 } },
    ]
  }
}
optimistic response (doesn't do anything):
{
  item: {
    edges: [
      { node: { id: 1 } },
      { node: { id: 2 } },
    ]
  }
}
server response:
{
  item: {
    edges: [
      { cursor: 1, node: { id: 1 } },
      { cursor: 2, node: { id: 2 } },
    ]
  }
}
What gives? It's equivalent (except for the cursor), and even if I add the cursor in, it still doesn't work.
What am I doing wrong? Also, is there an easier way to mock my ids into a connection?
Also, as an aside, is there a way to get this data piecemeal? Right now, reordering two items re-requests the whole list because of my mutation config. I suppose I could do it with RANGE_ADD and RANGE_DELETE to 'simulate a swap', but is there an easier way to do it?
Since you trigger a mutation in response to the user reordering the items, I assume you store the position or order of the items on the server side. For what you're doing, one way of creating an optimistic response is to use that position or order information. On the server side, an item needs to provide an additional position field. On the client side, the items displayed are sorted by position.
When the user swaps two items, in the optimistic response of your client-side mutation, you just need to swap the position fields of those two items. The same swap applies in the server-side mutation.
The optimistic response code can be like:
getOptimisticResponse() {
  return {
    item1: {
      id: this.props.item1.id,
      position: this.props.item2.position,
    },
    item2: {
      id: this.props.item2.id,
      position: this.props.item1.position,
    },
  };
}
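For context, in classic Relay that optimistic response would pair with a FIELDS_CHANGE config and a fat query along these lines; the mutation and payload field names here are hypothetical, so adjust them to your schema:

// Hypothetical sketch of the matching classic Relay mutation pieces.
getConfigs() {
  return [{
    type: 'FIELDS_CHANGE',
    // Map payload fields to the records that should be updated in the store.
    fieldIDs: {
      item1: this.props.item1.id,
      item2: this.props.item2.id,
    },
  }];
}

getFatQuery() {
  // Only the swapped positions need to be refetched, not the whole connection.
  return Relay.QL`
    fragment on SwapItemsPayload {
      item1 { position }
      item2 { position }
    }
  `;
}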

jsPlumb: Avoiding multiple connectors between divs

Experts,
I'm using jsPlumb to connect divs with multiple endpoints. I'm trying to prevent multiple connectors between the same two divs. So, for example, if I have divs A, B, and C, connectors between A and B, and between A and C, are OK, but not two connectors between A and B (using different endpoints).
Does anyone know how to do this?
Thanks!
It depends a lot on your code, but you can do something like this:
var exampleEndpoint = {
  reattach: true,
  scope: "yourScope",
  isSource: true,
  isTarget: true,
  beforeDrop: function(params) {
    return !checkExistingLinks(params.sourceId, params.targetId);
  }
};

function checkExistingLinks(sourceId, targetId) {
  var flag = false;
  jsPlumb.select({ source: sourceId }).each(function(connection) {
    if (connection.targetId === targetId) {
      flag = true;
    }
  });
  return flag;
}
That is, you need to detect the connection attempt and then verify whether the elements involved already have a connection. jsPlumb.select() returns a list (or map) of these connections for you.
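To tie it together, the endpoint definition above would be registered on each element roughly like this (the element ids are just examples):

// Example wiring; "divA", "divB", "divC" are placeholder element ids.
jsPlumb.ready(function() {
  ['divA', 'divB', 'divC'].forEach(function(id) {
    jsPlumb.addEndpoint(id, exampleEndpoint);
  });
});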

Restrict Google Places Autocomplete to return addresses only

autocomplete = new google.maps.places.Autocomplete(input, { types: ['geocode'] });
returns streets and cities amongst other larger areas. Is it possible to restrict to streets only?
This question is old, but I figured I'd add to it in case anyone else is having this issue. Restricting types to 'address' unfortunately does not achieve the expected result, as routes are still included. Thus, what I decided to do is loop through the result and implement the following check:
result.predictions[i].types.includes('street_address')
Unfortunately, I was surprised to find that my own address was not being included, as it was returning the following types: { types: ['geocode', 'premise'] }.
So I decided to use a counter: any result that includes 'geocode' or 'route' in its types must include at least one other type to be kept (whether that is 'street_address' or 'premise' or whatever). That way routes are excluded, and anything with a complete address is included. It's not foolproof, but it works fairly well.
Loop through the result predictions, and implement the following:
if (result.predictions[i].types.includes('street_address')) {
  // Results that include 'street_address' should be included
  suggestions.push(result.predictions[i]);
} else {
  // Results that don't include 'street_address' go through the check
  var typeCounter = 0;
  if (result.predictions[i].types.includes('geocode')) {
    typeCounter++;
  }
  if (result.predictions[i].types.includes('route')) {
    typeCounter++;
  }
  if (result.predictions[i].types.length > typeCounter) {
    suggestions.push(result.predictions[i]);
  }
}
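For context, here is a sketch of where the predictions come from when using the JS API's AutocompleteService, with the same filter condensed into one expression (the input string is a placeholder; predictions plays the role of result.predictions in the loop above):

// Sketch: fetch predictions, then keep complete addresses only.
const service = new google.maps.places.AutocompleteService();
service.getPlacePredictions({ input: 'some partial address', types: ['geocode'] }, (predictions, status) => {
  if (status !== google.maps.places.PlacesServiceStatus.OK || !predictions) return;
  const suggestions = predictions.filter((p) =>
    p.types.includes('street_address') ||
    // keep it only if it has at least one type besides 'geocode' / 'route'
    p.types.length > ['geocode', 'route'].filter((t) => p.types.includes(t)).length
  );
  // render suggestions...
});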
I think what you want is { types: ['address'] }.
You can see this in action with this live sample: https://developers.google.com/maps/documentation/javascript/examples/places-autocomplete (use the "Addresses" radio button).
