Apollo query does not return cached data available using readFragment

I have two queries: getGroups(): [Group] and getGroup($id: ID!): Group. One component first loads all groups using getGroups(), and later a different component needs to access a specific Group's data by ID.
I'd expect Apollo's normalization to already have the Group data in the cache and to use it when the getGroup($id: ID!) query is executed, but that's not the case.
When I set the cache-only fetchPolicy, nothing is returned. I can access the data using readFragment, but that's not as flexible as just using a query.
Is there an easy way to make Apollo return the cached data from a different query as I would expect?

It's pretty common to have a query field that returns a list of nodes and another that takes an id argument and returns a single node. However, deciding what specific node or nodes are returned by a field is ultimately part of your server's domain logic.
As a silly example, imagine if you had a field like getFavoriteGroup(id: ID!) -- you may have the group with that id in your cache, but that doesn't necessarily mean it should be returned by the field (it may not be favorited). Any number of factors (other arguments, execution context, etc.) might affect what node(s) are returned by a field. As a client, it's not Apollo's place to make assumptions about your domain logic.
However, you can effectively duplicate that logic by implementing query redirects.
import { InMemoryCache } from 'apollo-cache-inmemory';
import { toIdValue } from 'apollo-utilities';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      // The key must match the root query field name (getGroup in your schema)
      getGroup: (_, args) =>
        toIdValue(cache.config.dataIdFromObject({ __typename: 'Group', id: args.id })),
    },
  },
});
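If you are on Apollo Client 3, where cacheRedirects were replaced by type policies, the equivalent is a field read function that returns a reference (a sketch, assuming the field is named getGroup as in the question):

import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // Redirect getGroup(id) to the normalized Group:<id> cache entry, if present
        getGroup: {
          read(_, { args, toReference }) {
            return toReference({ __typename: 'Group', id: args.id });
          },
        },
      },
    },
  },
});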

Related

Getting 'delete impacts too many records' when deleting records using supabase

I'm currently using Supabase together with GraphQL and trying to delete some data using a mutation. Unfortunately, this mutation sometimes fails with an error telling me that the delete impacts too many records. Does anyone have any idea what might be causing this?
The mutation I'm using is:
mutation UnfollowUser($followerUserId: UUID, $followingUserId: UUID) {
  deleteFromfollowsCollection(filter: {follower: {eq: $followerUserId}, following: {eq: $followingUserId}}) {
    affectedCount
  }
}
Although it does not seem to be documented anywhere, it turns out that Supabase's pg_graphql has an atMost argument. It can be used to limit the number of deleted records and seems to default to 1.
Using this argument we can adjust the previously described mutation and allow it to delete up to 10 records at a time.
mutation UnfollowUser($followerUserId: UUID, $followingUserId: UUID) {
  deleteFromfollowsCollection(filter: {follower: {eq: $followerUserId}, following: {eq: $followingUserId}}, atMost: 10) {
    affectedCount
  }
}
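For completeness, here is a minimal sketch of sending this mutation to the Supabase GraphQL endpoint with fetch; the project URL and key below are placeholders:

// pg_graphql is exposed at /graphql/v1 of the project URL (placeholders below)
const SUPABASE_URL = 'https://YOUR_PROJECT.supabase.co';
const SUPABASE_ANON_KEY = 'YOUR_ANON_KEY';

const UNFOLLOW_USER = `
  mutation UnfollowUser($followerUserId: UUID, $followingUserId: UUID) {
    deleteFromfollowsCollection(
      filter: {follower: {eq: $followerUserId}, following: {eq: $followingUserId}},
      atMost: 10
    ) {
      affectedCount
    }
  }
`;

async function unfollowUser(followerUserId, followingUserId) {
  const res = await fetch(`${SUPABASE_URL}/graphql/v1`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      apikey: SUPABASE_ANON_KEY,
      Authorization: `Bearer ${SUPABASE_ANON_KEY}`,
    },
    body: JSON.stringify({
      query: UNFOLLOW_USER,
      variables: { followerUserId, followingUserId },
    }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors[0].message);
  return data.deleteFromfollowsCollection.affectedCount;
}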
Reference

Architecture for avoiding repeated data in GraphQL

I have an application where the same data is present in many places in the graph, and I need to optimize the data queries to avoid processing and sending the same data too often.
As an example, consider the following pseudo-schema:
type Group {
  name: String
  members: [Person]
}

type Person {
  name: String
  email: String
  avatar: Avatar
  follows: [Person]
  followedBy: [Person]
  contacts: [Person]
  groups: [Group]
  bookmarks: [Bookmark]
  sentMessages: [Message]
  receivedMessages: [Message]
}

type Message {
  text: String
  author: Person
  recipients: [Person]
}

type Bookmark {
  message: Message
}
Querying a user's data can easily produce hundreds, if not thousands, of Person objects, even though the small circle of friends/contacts/follows only contains tens of distinct users.
In my real implementation, about 80% of each GraphQL query (in bytes) is redundant, and considering that the client runs many different queries in the same space, over 90% of all data transferred and processed is redundant.
How could I improve the model so that I don't have to load the same data again and again without complicating the client too much?
I'm using Apollo for both GraphQL client and server.
Use/implement pagination (instead of plain arrays) for relations. That way you can query for a count/total (and render it without processing the whole array) and for an array of ids only; usually there is no need to query/join the person table (in the DB) at all.
Render the list of Person components (React?) using only the passed id prop. Only a rendered Person then fetches its details (if not already cached; use batching to merge the requests), which are consumed/rendered inside the component. A sketch of this approach follows below.
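A rough sketch of what that could look like with Apollo and React; the schema fields and query names here (members { total ids }, person(id:)) are made up for illustration:

import React from 'react';
import { gql, useQuery } from '@apollo/client';

// The relation returns only a total and an array of ids; no Person fields are duplicated here.
const MEMBER_IDS_QUERY = gql`
  query GroupMembers($groupId: ID!) {
    group(id: $groupId) {
      id
      members {
        total
        ids
      }
    }
  }
`;

// Each rendered Person fetches (or reads from the cache) its own details.
const PERSON_QUERY = gql`
  query Person($id: ID!) {
    person(id: $id) {
      id
      name
      email
    }
  }
`;

function Person({ id }) {
  const { data, loading } = useQuery(PERSON_QUERY, { variables: { id } });
  if (loading) return null;
  return <li>{data.person.name}</li>;
}

function GroupMembers({ groupId }) {
  const { data } = useQuery(MEMBER_IDS_QUERY, { variables: { groupId } });
  if (!data) return null;
  return (
    <ul>
      {data.group.members.ids.map((id) => (
        <Person key={id} id={id} />
      ))}
    </ul>
  );
}

On the server side, the resulting fan-out of small per-Person queries can be merged with a batching link or a DataLoader so it doesn't turn into an N+1 problem.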

How can I pass arguments to child fields in Apollo?

I'm trying to build a GraphQL interface that deals with data from different regions, and for each region there's a different DB.
What I'm trying to accomplish is:
const typeDefs = gql`
  type Player {
    account_id: Int
    nickname: String
    clan_id: Int
    clan_info: Clan
  }

  type Clan {
    name: String
  }
`;
So right now I can request player(region, id), and this pulls up the player details; no issues there.
But the issue is that the clan_info field also requires the region from the parent, so the resolver would look like clan_info({clan_id}, region).
Is there any way to pass down the region from the parent to a child field? I know I could add it to the details of the player, but I would rather not, since there will be millions of records and every field counts.
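There is no built-in way for a child resolver to read a grandparent's arguments, but a common workaround is to attach the region to the object the player resolver returns (it only lives in the resolver chain, not in the database), or to stash it on the per-request context. A sketch with Apollo Server in JavaScript, where the data-access helpers (dataSources.playersFor, dataSources.clansFor) are placeholders:

const resolvers = {
  Query: {
    player: async (_, { region, id }, { dataSources }) => {
      const player = await dataSources.playersFor(region).getPlayer(id);
      // Not stored anywhere; just carried along on the parent object for child resolvers
      return { ...player, region };
    },
  },
  Player: {
    clan_info: (parent, _args, { dataSources }) => {
      // parent.region was attached by the player resolver above
      return dataSources.clansFor(parent.region).getClan(parent.clan_id);
    },
  },
};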

Filter by Time in GraphQL (using FaunaDB service)

My GraphQL schema looks like this:
type Todo {
  name: String!
  created_at: Time
}

type Query {
  allTodos: [Todo!]!
  todosByCreatedAtFlag(created_at: Time!): [Todo!]!
}
This query works.
query {
  todosByCreatedAtFlag(created_at: "2017-02-08T16:10:33Z") {
    data {
      _id
      name
      created_at
    }
  }
}
Could anyone point out how I can create a greater-than (or less-than) Time query in GraphQL (using FaunaDB)?
GraphQL range queries are not supported (yet.. they're coming!)
FaunaDB does not provide range queries for its GraphQL API out of the box; we are working on these features.
.. but there is a workaround.
That doesn't mean, though, that it can't do range queries. Range queries are supported in FQL, and you can always 'escape' from GraphQL to FQL to implement more advanced queries by writing a User Defined Function (UDF).
.. using resolvers
By using the @resolver directive in your schema you can implement GraphQL queries yourself by writing a User Defined Function (UDF) in FaunaDB's FQL. There are some basic examples in the documentation, but I imagine you might need some help, so I'll write out a simple example.
I imported your schema and added two example documents.
The first thing to do is extend the schema with the resolver:
type Todo {
  name: String!
  created_at: Time
}

type Query {
  allTodos: [Todo!]!
  todosByCreatedAtFlag(created_at: Time!): [Todo!]!
  todosByCreatedRange(before: Time, after: Time): [Todo!]! @resolver
}
All this does is create a FaunaDB function (UDF) for us to implement.
If we call the field via GraphQL at this point, we get an Abort error, since the function has not been implemented yet; but it does show that the GraphQL query calls the function.
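For reference, the GraphQL side of the call looks like an ordinary query; the arguments are handed to the UDF's Lambda in the order they are declared. The timestamps below are arbitrary examples (with the implementation further down, before acts as the start of the range and after as the end):

query {
  todosByCreatedRange(before: "2017-01-01T00:00:00Z", after: "2017-12-31T00:00:00Z") {
    _id
    name
    created_at
  }
}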
.. UDF implementation
The first thing we will do is add the parameters, which is just a matter of naming them as the first argument of the Lambda.
A Lambda also accepts an array of names in case you need to pass multiple parameters (which I do in the resolver defined in the schema).
Next, we'll add an index to support our query. In an index, values are used for ranges (as well as for return values and sorting). We'll add created_at so we can range over it, and also ref, since we'll need it to get the actual document behind each index entry.
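A sketch of what that index definition could look like in FQL (assuming the GraphQL import created a collection named Todo, which is Fauna's default for a type of that name):

CreateIndex({
  name: "todosByCreatedAtRange",
  source: Collection("Todo"),
  values: [
    // created_at first so we can Range over it, then ref to fetch the document later
    { field: ["data", "created_at"] },
    { field: ["ref"] }
  ]
})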
We can then start by writing a simple function (that won't fully work yet)
Query(
  Lambda(
    ["before", "after"],
    Paginate(
      Range(Match(Index("todosByCreatedAtRange")), Var("before"), Var("after"))
    )
  )
)
and test it by calling the function manually via the shell.
This indeed returns the two documents (the range is inclusive).
There is one problem with this, though: it does not return the data in the structure that GraphQL expects, so calling it via GraphQL produces errors.
We can do two things now: either define a type in our schema that fits this shape, or adapt the data the function returns. We'll do the latter and adapt our result to the expected [Todo!]! shape.
Step one: map over the result. The only thing we introduce here is the Map and the Lambda. We don't do anything special yet; as an example, we just return the reference instead of both the created_at value and the reference.
Query(
  Lambda(
    ["before", "after"],
    Map(
      Paginate(
        Range(
          Match(Index("todosByCreatedAtRange")),
          Var("before"),
          Var("after")
        )
      ),
      Lambda(["created_at", "ref"], Var("ref"))
    )
  )
)
Calling it indeed shows that the function now only returns references.
Let's get the actual documents. I know that FQL is verbose (with good reason, although it should become less verbose in the future), so I've started adding comments to clarify things:
Query(
  Lambda(
    ["before", "after"],
    Map(
      // This is just the query to get your range
      Paginate(
        Range(
          Match(Index("todosByCreatedAtRange")),
          Var("before"),
          Var("after")
        )
      ),
      // This is a function that will be executed on each result (with the help of Map)
      Lambda(["created_at", "ref"],
        // We'll use Let to structure our queries (allowing us to use variables)
        Let({
          todo: Get(Var("ref"))
        },
        // And then we return something
        Var("todo")))
    )
  )
)
Our function now returns data.. woohoo!
We still need to make sure this data conforms to what GraphQL expects. From the schema we can see that it expects a [Todo!]! (see the docs tab), and a Todo looks like this (see the schema tab):
type Todo {
  _id: ID!
  _ts: Long!
  name: String!
  created_at: Time
}
As you can also see from the docs tab, 'non-resolver' queries are automatically changed to return TodoPages, and the function we wrote so far actually returns pages as well.
Option 1, change the schema and turn it into a paginated resolver.
We can fix this by adding the paginated: true option to the resolver. You will have to take into account the extra parameters that will be added to the resolver, as explained here. I haven't tried that myself, so I'm not 100% certain how it would work. The advantage of a paginated resolver is that you can immediately take advantage of sane pagination in the GraphQL endpoint.
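In the schema, that would look roughly like this (an untested sketch; Fauna adds extra pagination arguments such as a size and cursor to the field, as described in the documentation referenced above):

type Query {
  todosByCreatedRange(before: Time, after: Time): [Todo!]! @resolver(paginated: true)
}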
Option 2, turn it into a non-paginated result.
A paginated result is a result that looks as follows:
{
  data: [ document1, document2, .. ],
  before: ...,
  after: ..
}
Our result type doesn't accept a page but an array, so we'll change the function to select the data field out of the page, and we have our result.
The complete query looks as follows:
Query(
  Lambda(
    ["before", "after"],
    Select(
      ["data"],
      Map(
        Paginate(
          Range(
            Match(Index("todosByCreatedAtRange")),
            Var("before"),
            Var("after")
          )
        ),
        Lambda(
          ["created_at", "ref"],
          Let({ todo: Get(Var("ref")) }, Var("todo"))
        )
      )
    )
  )
)
Disclaimers
Once you go custom, pagination also becomes your responsibility (e.g. passing along an extra parameter). You can no longer fetch relations out of the box as you normally would by just requesting the relations in the GraphQL body.
Some words on the benefits of UDFs and the hybrid of GraphQL/FQL
Before you shy away from FQL (and yes, we do have to add range queries and are working on that), here is some explanation on the UDF approach in general and why it makes sense to think about it anyway.
At a certain moment you will encounter things in GraphQL that are just impossible (complex conditional transactions, e.g. update a document and update this other document only if some condition that results from the previous update is true). Users of other GraphQL implementations typically solve this by writing a serverless function when they have to implement advanced logic or transactions.
FaunaDB's answer to this is to use its User Defined Functions (UDFs). A UDF is not a serverless function; it's a FaunaDB function implemented in FQL, which might seem cumbersome at first, but it's important to realize that it gives you the same benefits (multi-region, strong consistency, scalability, free tier, pay-as-you-go) that FaunaDB provides.

Fetching the data optimally in GraphQL

How can I write the resolvers such that I can generate a database sub-query in each resolver and effectively combine all of them to fetch the data at once?
For the following schema :
type Node {
  index: Int!
  color: String!
  neighbors(first: Int = null): [Node!]!
}

type Query {
  nodes(color: String!): [Node!]!
}

schema {
  query: Query
}
To perform the following query :
{
  nodes(color: "red") {
    index
    neighbors(first: 5) {
      index
    }
  }
}
Data store:
In my data store, nodes and neighbors are stored in separate tables. I want to write a resolver so that we can fetch the required data optimally.
If there are any similar examples, please share the details. (It would be helpful to get an answer in reference to graphql-java)
DataFetchingEnvironment provides access to sub-selections via DataFetchingEnvironment#getSelectionSet. This means, in your case, you'd be able to know from the nodes resolver that neighbors will also be required, so you could JOIN appropriately and prepare the result.
One limitation of the current implementation of getSelectionSet is that it doesn't provide info on conditional selections. So if you're dealing with interfaces and unions, you'll have to manually collect the sub-selection starting from DataFetchingEnvironment#getField. This will very likely be improved in the future releases of graphql-java.
The recommended and most common way is to use a data loader.
A data loader collects the info about which fields to load from which table and which where filters to use.
I haven't worked with GraphQL in Java, so I can only give you directions how you could implement this yourself.
Create an instance of your data loader and pass it to your resolvers as the context argument.
Your resolvers should pass the table name, a list of field names and a list of where conditions to the data loader and return a promise.
Once all the resolvers have executed your data loader should combine those lists so you only end up with one query per table.
You should remove duplicate field names and combine the where conditions using the or keyword.
After the queries have executed, you can return all of this data to your resolvers and let them filter it (since we combined the conditions using the or keyword).
As an advanced feature your data loader could apply the where conditions before returning the data to the resolvers so that they don't have to filter them.
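Since the answer above is language-agnostic, here is a rough JavaScript sketch of the idea; the database client (db.select) and the condition format are placeholders, not a specific library:

// Collects requested fields and where-conditions per table, then issues
// one combined query per table once all resolvers have registered their needs.
class TableBatchLoader {
  constructor(db) {
    this.db = db;             // placeholder database client
    this.pending = new Map(); // table -> { fields: Set, requests: [] }
  }

  // Called from resolvers; returns a promise for the rows relevant to this resolver.
  load(table, fields, where) {
    if (!this.pending.has(table)) {
      this.pending.set(table, { fields: new Set(), requests: [] });
    }
    const entry = this.pending.get(table);
    fields.forEach((f) => entry.fields.add(f)); // de-duplicate field names
    return new Promise((resolve) => entry.requests.push({ where, resolve }));
  }

  // Called once per request/tick: one query per table, conditions combined with OR.
  async dispatch() {
    for (const [table, entry] of this.pending) {
      const rows = await this.db.select({
        table,
        fields: [...entry.fields],
        where: { or: entry.requests.map((r) => r.where) },
      });
      // Hand each resolver only the rows matching its own condition.
      for (const { where, resolve } of entry.requests) {
        resolve(rows.filter((row) => matches(row, where)));
      }
    }
    this.pending.clear();
  }
}

// Placeholder: checks a row against a simple { column: value } condition.
function matches(row, where) {
  return Object.entries(where).every(([column, value]) => row[column] === value);
}

A resolver would then call something like context.loader.load('person', ['id', 'name'], { group_id: parent.id }) and await the returned promise, with dispatch() triggered once the current batch of resolvers has run (for example on the next tick, as the DataLoader pattern does).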
