Problems with caching in Apollo Client - apollo-client

I have some problems with the Apollo cache; I'll describe the scenario:
I have a query, QueryOne, that returns data under a key named "getUserData".
And another query, QueryTwo, that returns different data under the same key, "getUserData". Both responses may have the same key, but the values are different.
That behavior causes my cache to get overwritten; as far as I can see, Apollo saves key-value pairs, not key-value pairs per query.
I need the Apollo cache to keep those two cached objects separate and to understand that the key might be the same but the response is different per query.
Can somebody point me in the right direction? I've read the docs.
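The collision described above can be illustrated with a toy model (an assumption for illustration, not Apollo's actual implementation): a normalized cache whose root entries are keyed by field name alone will let the second query's result overwrite the first.

```javascript
// Toy model (assumption): a normalized cache whose ROOT_QUERY entries are
// keyed by field name only, illustrating why two different queries that both
// return a "getUserData" field clobber each other.
const cache = { ROOT_QUERY: {} };

function writeQueryResult(fieldName, value) {
  cache.ROOT_QUERY[fieldName] = value; // keyed by field name, not by query
}

writeQueryResult("getUserData", { profile: "from QueryOne" });
writeQueryResult("getUserData", { stats: "from QueryTwo" });

// The second write overwrote the first entry.
console.log(cache.ROOT_QUERY.getUserData); // { stats: "from QueryTwo" }
```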

Apollo Client caches queries that have the same variables. For example:
const { data } = useQuery(ALL_THINGS)
Calling the above query multiple times will return the cached results. If you add variables, and those variables are different, the query will go to the network as normal, and each distinct set of variables is cached separately.
You can force the query to ignore the cache with this:
const { data } = useQuery(ALL_THINGS, {
  fetchPolicy: 'network-only', // Ignore cache
})

Related

Auto-update of apollo client cache after mutation not affecting existing queries

I have a mutation (UploadTransaction) that returns a list of objects of type Transaction.
#import "TransactionFields.gql"
mutation UploadTransaction($files: [Upload!]!) {
  uploadFile(files: $files) {
    transactions {
      ...TransactionFields
    }
  }
}
A Transaction returned from the backend (Graphene) has id and __typename fields, so it should automatically update Transaction objects in the cache. In Chrome dev tools for Apollo, I can see the new transactions.
I also have a query GetTransactions fetching all Transaction objects.
#import "TransactionFields.gql"
query GetTransactions {
  transactions {
    ...TransactionFields
  }
}
However, I don't see the newly added Transaction objects being returned by the query. During the initial load, Apollo Client loaded 292 transactions, which it shows under ROOT_QUERY, and it keeps returning the same 292 transactions. The UploadTransaction mutation adds the new objects of type "Transaction" to the cache in dev tools, without affecting ROOT_QUERY in dev tools or my query in code.
TransactionFields.gql is
fragment TransactionFields on Transaction {
  id
  timestamp
  description
  amount
  category {
    id
    name
  }
  currency
}
Any idea what I am doing wrong? I am new to Apollo Client and GraphQL.
From the docs:
If a mutation updates a single existing entity, Apollo Client can automatically update that entity's value in its cache when the mutation returns. To do so, the mutation must return the id of the modified entity, along with the values of the fields that were modified. Conveniently, mutations do this by default in Apollo Client...
If a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
If you have a query that returns a list of entities (for example, users) and then create or delete a user, Apollo has no way of knowing that the list should be updated to reflect your mutation. The reason for this is twofold:
There's no way for Apollo to know what a mutation is actually doing. All it knows is what fields you are requesting and what arguments you are passing to those fields. We might assume that a mutation that includes words like "insert" or "create" is inserting something on the backend, but that's not a given.
There's no way to know that inserting, deleting or updating a user should update a particular query. Your query might be for all users with the name "Bob" -- if you create a user with the name "Susan", the query shouldn't be updated to reflect that addition. Similarly, if a mutation updates a user, the query might need to be updated to reflect the change. Whether it should or not ultimately boils down to business rules that only your server knows about.
So, in order to update the cache, you have two options:
Trigger a refetch of the relevant queries. You can do this by either passing a refetchQueries option to your useMutation hook, or by manually calling refetch on those queries. Since this requires one or more additional requests to your server, it's the slower and more expensive option but can be the right option when A) you don't want to inject a bunch of business logic into your client or B) the updates to the cache are complicated and extensive.
Provide an update function to your useMutation hook that tells Apollo how to update the cache based on the results of the mutation. This saves you from making any additional requests, but does mean you have to duplicate some business logic between your server and your client.
The example of using update from the docs:
update(cache, { data: { addTodo } }) {
  const { todos } = cache.readQuery({ query: GET_TODOS });
  cache.writeQuery({
    query: GET_TODOS,
    data: { todos: todos.concat([addTodo]) },
  });
}
Read the docs for additional details.

I don't get GraphQL. How do you solve the N+1 issue without preloading?

A neighborhood has many homes. Each home is owned by a person.
Say I have this graphql query:
{
  neighborhoods {
    homes {
      owner {
        name
      }
    }
  }
}
I can preload the owners, and that makes the data request a single SQL query. Fine.
But if I don't request the owner in the GraphQL query, the data will still be preloaded.
And if I don't preload, the data will either be fetched in every query, or not at all, since I'm not loading the belongs_to association in the resolver.
I'm not sure if this is a solved issue, or just a pain point one must swallow when working with GraphQL.
Using Absinthe, DataLoader and Elixir by the way.
Most GraphQL implementations, including Absinthe, expose some kind of "info" parameter that contains information specific to the field being resolved and the request being executed. You can parse this object to determine which fields were actually requested and build your SQL query appropriately.
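The idea can be sketched like this (a hypothetical, simplified selection-set shape rather than Absinthe's actual structs, and in JavaScript for brevity): walk the requested selections and only add the owners join or preload when the owner field was actually asked for.

```javascript
// Hypothetical, simplified selection-set shape (not Absinthe's real structs):
// each selection has a name and, optionally, nested selections.
function needsOwnerJoin(selectionSet) {
  return selectionSet.selections.some(
    (sel) =>
      sel.name === "owner" ||
      (sel.selectionSet && needsOwnerJoin(sel.selectionSet))
  );
}

const withOwner = {
  selections: [
    { name: "homes", selectionSet: { selections: [{ name: "owner" }] } },
  ],
};
const withoutOwner = {
  selections: [{ name: "homes" }],
};

needsOwnerJoin(withOwner);    // true  -> add the JOIN / preload
needsOwnerJoin(withoutOwner); // false -> skip it
```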
See this issue for a more in-depth discussion.
To complement what Daniel Rearden said, you have to use info.definition to resolve nested includes.
In my application I defined an array of possible values like:
defp relationships do
  [
    {:person, [tasks: [:items]]},
    ...
  ]
end
Then I have logic that iterates over info.definition and uses this function to preload the associations.
You can use a DataLoader to lazy-load your resources, usually to batch third-party requests or perform a complex database query.
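The batching-and-caching idea behind DataLoader can be sketched as follows (a synchronous toy for illustration; real DataLoader implementations batch asynchronously, typically once per event-loop tick):

```javascript
// Toy, synchronous DataLoader-style batcher: collect keys, then resolve
// them all with a single batched call, caching the results per request.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // (keys) => values, called once per flush
    this.pending = [];
    this.cache = new Map();
  }
  load(key) {
    if (!this.cache.has(key)) {
      this.cache.set(key, undefined); // reserve slot, dedupe repeats
      this.pending.push(key);
    }
  }
  flush() {
    if (this.pending.length === 0) return;
    const values = this.batchFn(this.pending); // one "SQL" round trip
    this.pending.forEach((k, i) => this.cache.set(k, values[i]));
    this.pending = [];
  }
  get(key) {
    return this.cache.get(key);
  }
}

// Usage: three owner lookups (one duplicated), a single batched query.
let batchCalls = 0;
const loader = new TinyLoader((ids) => {
  batchCalls++;
  return ids.map((id) => ({ id, name: `Owner ${id}` }));
});
[1, 2, 1].forEach((id) => loader.load(id));
loader.flush();
// batchCalls === 1, loader.get(1).name === "Owner 1"
```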

Update Apollo cache after object creation

What are all the different ways of updating the Apollo InMemoryCache after a mutation? From the docs, I can see:
- Id-based updates, which Apollo performs automatically
  - Happens for single updates to existing objects only.
  - Requires an id field which uniquely identifies each object, or the cache must be configured with a dataIdFromObject function which provides a unique identifier.
- "Manual" cache updates via update functions
  - Required for object creation, deletion, or updates of multiple objects.
  - Involves calling cache.writeQuery with details including which query should be affected and how the cache should be changed.
- Passing the refetchQueries option to the useMutation hook
  - The calling code says which queries should be re-fetched from the API, Apollo does the fetching, and the results replace whatever is in the cache for the given queries.
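For reference, the identifier logic that dataIdFromObject lets you override can be sketched roughly like this (the null fallback, meaning "not normalizable", is an assumption for illustration):

```javascript
// Rough sketch of a dataIdFromObject-style cache-identifier function:
// combine __typename and id into a unique cache key.
function defaultDataIdFromObject(obj) {
  if (obj.__typename && obj.id != null) {
    return `${obj.__typename}:${obj.id}`;
  }
  return null; // assumption: no identifier -> object is not normalized
}

defaultDataIdFromObject({ __typename: "Post", id: 7 }); // "Post:7"
```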
Are there other ways that I've missed, or have I misunderstood anything about the above methods?
I am confused because I've been reading the code of a project which uses Apollo for all kinds of mutations, including creations and deletions, but I don't see any calls to cache.writeQuery, nor any usage of refetchQueries. How does the cache get updated after creations and deletions without either of those?
In my own limited experience with Apollo, the cache is not automatically updated after an object creation or deletion, not even if I define dataIdFromObject. I have to update the cache myself by writing update functions.
So I'm wondering if there is some secret config I've missed to make Apollo handle it for me.
The only way to create or delete a node and have Apollo automatically update the cache to reflect the change is to return the parent field of whatever field contains the updated List field. For example, let's say we have a schema like this:
type Query {
  me: User
}

type User {
  id: ID!
  posts: [Post!]!
}

type Post {
  id: ID!
  body: String!
}
By convention, if we had a mutation to add a new post, the mutation field would return the created post.
type Mutation {
  writePost(body: String!): Post!
}
However, we could have it return the logged in User instead (the same thing the me field returns):
type Mutation {
  writePost(body: String!): User!
}
by doing so, we enable the client to make a query like:
mutation WritePost($body: String!) {
  writePost(body: $body) {
    id
    posts {
      id
      body
    }
  }
}
Here Apollo will not only create or update the cache for all the returned posts, but it will also update the returned User object, including the list of posts.
So why is this not commonly done? Why does Apollo's documentation suggest using writeQuery when adding or deleting nodes?
The above will work fine when your schema is simple and you're working with a relatively small amount of data. However, returning the entire parent node, including all its relations, can be noticeably slower and more resource-intensive once you're dealing with more data. Additionally, in many apps a single mutation could impact multiple queries inside the cache. The same node could be returned by any number of fields in the schema, and even the same field could be part of a number of different queries that utilize different filters, sort parameters, etc.
These factors make it unlikely that you'll want to implement this pattern in production but there certainly are use cases where it may be a valid option.

Does GraphQL ever redundantly visit fields during execution?

I was reading this article and it used the following query:
{
  getAuthor(id: 5) {
    name
    posts {
      title
      author {
        name # this will be the same as the name above
      }
    }
  }
}
Which was parsed and turned into an AST like the one below:
Clearly it is bringing back redundant information (the author's name is asked for twice), so I was wondering how GraphQL handles that. Does it redundantly fetch that information? Is the diagram a proper depiction of the actual AST?
Any insight into the query parsing and execution process relevant to this would be appreciated, thanks.
Edit: I know this may vary depending on the actual implementation of the GraphQL server, but I was wondering what the standard / best practice is.
Yes, GraphQL may fetch the same information multiple times in this scenario. GraphQL does not memoize the resolver function, so even if it is called with the same arguments and the same parent value, it will still run again.
This is a fairly common problem when working with databases in GraphQL. The most common solution is to utilize DataLoader, which not only batches your database requests, but also provides a cache for those requests for the duration of the GraphQL request. This way, even if a particular record is requested multiple times, it will only be fetched from the database once.
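The caching half of that can be sketched like so (a toy per-request cache, not DataLoader's actual API):

```javascript
// Toy per-request cache: the same author requested twice hits the
// underlying "database" only once within a single GraphQL request.
let dbCalls = 0;
const fetchAuthorFromDb = (id) => {
  dbCalls++;
  return { id, name: `Author ${id}` };
};

const requestCache = new Map();
function loadAuthor(id) {
  if (!requestCache.has(id)) {
    requestCache.set(id, fetchAuthorFromDb(id));
  }
  return requestCache.get(id);
}

loadAuthor(5); // resolves getAuthor(id: 5) -> database hit
loadAuthor(5); // resolves posts[].author   -> served from cache
// dbCalls === 1
```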
The alternative (albeit more complicated) approach is to compose a single database query based on the requested fields that executes at the root level. For example, our resolver for getAuthor could construct a single query that would return the author, their posts, and each post's author. With this approach, we can skip writing resolvers for the posts field on the Author type or the author field on the Post type and just utilize the default resolver behavior. However, in order to do this and avoid overfetching, we have to parse the GraphQL request inside the getAuthor resolver in order to determine which fields were requested and should therefore be included in our database query.

GraphQL: Can you mutate the results of a query?

In writing this question I realised that there is something very specific I want to be able to do in GraphQL, and I can't see a good way of implementing it. The idea is this:
One of the nice things about GraphQL is that it allows you to make flexible queries. For example, if I want to find all the comments on all the posts of each user in a particular forum then I can make the query
query {
  findForum(id: 7) {
    users {
      posts {
        comments {
          content
        }
      }
    }
  }
}
which is great. Often, you want to collect data with the intention of mutating it. So in this case, maybe I don't want to fetch all of those comments, and instead I want to delete them. A naive suggestion is to implement a deleteComment field on the comment type, which mutates the object it is called on. This is bad because the request is tagged as a query, so it should not mutate data.
Since we're mutating data, we should definitely tag this as a mutation. But then we lose the ability to make the query we wanted to make, because findForum is a query field, not a mutation field. A way around this might be to redefine all the query fields you need inside the mutation type. This is obviously not a good idea, because you repeat a lot of code, and also make the functionality for query a strict subset of that of mutation.
Now, what I regard as the 'conventional' solution is to make a mutation field which does this job and nothing else. So you define a mutation field deleteAllUserPostCommentsByForum which takes an argument, and implement it in the obvious way. But now you've lost the flexibility! If you decide instead that you want to find the user explicitly, and delete all their posts, or if you only want to delete some of their posts, you need a whole new mutation field. This feels like precisely the sort of thing I thought GraphQL was useful for when compared to REST.
So, is there a good way to avoid these problems simultaneously?
Under the hood, the only real difference between queries and mutations is that if a single operation includes multiple mutations, they are resolved sequentially (one at a time) rather than concurrently. Queries, and all other fields, are resolved concurrently. That means for an operation like this:
mutation myOperation {
  editComment(id: 1, body: "Hello!")
  deleteComment(id: 1)
}
The editComment mutation will resolve before the deleteComment mutation. If these operations were queries, they would both be run at the same time. Likewise, consider a mutation that returns an object, like this:
mutation myOperation {
  deleteComment(id: 1) {
    id
    name
  }
}
In this case, the id and name fields are also resolved at the same time (because, even though they are returned as part of a mutation, the fields themselves are not mutations).
This difference in behavior between queries and mutations highlights why by convention we define a single mutation per operation and avoid "nesting" mutations like your question suggests.
The key to making your mutations more flexible lies in how you pass inputs to your mutation and, subsequently, how you handle those inputs inside your resolver. Instead of making a deleteAllUserPostCommentsByForum mutation, just make a deleteComments mutation that accepts a more robust input type, for example:
input DeleteCommentsInput {
  forumId: ID
  userId: ID
}
Your resolver then just needs to handle whatever combination of input fields that may be passed in. If you're using a db, this sort of input very easily translates to a WHERE clause. If you realize you need additional functionality, for example deleting comments before or after a certain date, you can then add those fields to your Input Type and modify your resolver accordingly -- no need to create a new mutation.
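That translation can be sketched like this (column names forum_id and user_id are assumptions for illustration; the clause is parameterized rather than string-interpolated to avoid SQL injection):

```javascript
// Hypothetical mapping from a DeleteCommentsInput object to a WHERE clause.
// Each input field that is present contributes one condition.
function buildWhere(input) {
  const clauses = [];
  const params = [];
  if (input.forumId != null) {
    clauses.push("forum_id = ?");
    params.push(input.forumId);
  }
  if (input.userId != null) {
    clauses.push("user_id = ?");
    params.push(input.userId);
  }
  const sql = clauses.length ? "WHERE " + clauses.join(" AND ") : "";
  return { sql, params };
}

buildWhere({ forumId: 7 });            // { sql: "WHERE forum_id = ?", params: [7] }
buildWhere({ forumId: 7, userId: 3 }); // { sql: "WHERE forum_id = ? AND user_id = ?", params: [7, 3] }
```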
You can actually handle creates and edits similarly and keep things a little DRY-er. For example, your schema could look like this:
type Mutation {
  createOrUpdateComment(comment: CommentInput)
}

input CommentInput {
  id: ID
  userId: ID
  body: String
}
Your resolver can then check whether an ID was included -- if so, then it treats the operation as an update, otherwise it treats the operation as an insert. Of course, using non-nulls in this case can get tricky (userId might be needed for a create but not an update) so there's something to be said for having separate Input Types for each kind of operation. However, hopefully this still illustrates how you can leverage input types to make your mutations more flexible.
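The branching described above can be sketched like this (db.insert and db.update are hypothetical data-access helpers, not part of any real library):

```javascript
// Sketch: treat the operation as an update when an id is present,
// otherwise as an insert. `db` is a hypothetical data-access object.
function resolveCreateOrUpdateComment(input, db) {
  if (input.id != null) {
    return db.update(input); // edit the existing comment
  }
  return db.insert(input);   // create a new comment
}

// Usage with a stub db:
const db = {
  insert: (c) => ({ ...c, id: 101, op: "insert" }),
  update: (c) => ({ ...c, op: "update" }),
};
resolveCreateOrUpdateComment({ userId: 1, body: "hi" }, db).op; // "insert"
resolveCreateOrUpdateComment({ id: 5, body: "edited" }, db).op; // "update"
```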
IMHO you lose sight of many indirect aspects.
Trying to create a 'flexible' query can result in highly unoptimized server actions.
Queries are resolved structurally, level by level, which may mean processing a lot of unnecessary data (high memory usage). This can't be optimized at lower layers (e.g. the SQL server); it results in a naive implementation, like many 'manually fired' SQL queries, versus one more complex query with conditions.
In this case, for example, the server doesn't need all the users at all, since a user's post/comment usually contains user_id (and forum/thread/post id) fields; the deletion can be processed directly on one table (with posts joined). You don't need the whole structure to affect only some of its elements.
The real power and flexibility of GraphQL lie in the resolvers.
Notice that deleting all comments versus only some of them can be implemented in completely different ways. A resolver can choose the better way (driven by parameters, as Daniel wrote), but for simplicity (and readability of the API) it can be better to have separate mutations.
