Apollo Client cache doesn't work as I expected - graphql

My problem:
I am pretty new to GraphQL and I am developing my first full-stack app using Apollo Server and Client: a simple blog.
On the client side, I use the same query on two different pages, but with different variables. The query fetches a blog article by ID or by slug, depending on the page. The result is the same; only the query variables change.
Since the query already ran on the first page, I expected it not to run on the second page because of the Apollo cache. But that is not what happens: the query runs again on the second page and, of course, returns the same result as on the first.
Why doesn't Apollo use the cache in this case?
Here is the code I use:
On server side, I have a pretty basic query to fetch an article from a blog, which can be fetched by ID or Slug:
type Query {
  ...
  article(id: ID, slug: String): Article
  ...
}
On the client side, I query an article by slug if the article is published, or by ID while it is still a draft.
The query by slug:
<Query
  query={article}
  variables={{ slug }}
  fetchPolicy="cache-and-network"
>
  {({ loading, error, data }) => {
    return (
      <Article
        loading={loading}
        article={data && data.article}
      />
    );
  }}
</Query>
The query by ID is the same, except for the variables param, which uses the ID:
<Query
  query={article}
  variables={{ id }}
>
  {({ loading, error, data }) => {
    return (
      <EditArticle loading={loading} article={data && data.article} />
    );
  }}
</Query>
As you can see, both use the same query against the same GraphQL endpoint, and the result is the same. But the cache is not used.

Apollo assumes that your resolvers are pure: they don't have side effects and mostly return the same result given the same input/arguments. That is already a lot to assume. Imagine a resolver that returns a random number, or the newest comment on a news website; neither would always return the same result for the same input. On the other hand, Apollo does not make - and pretty much cannot make - assumptions about the implementation of your resolver. While in your head the implementation of your article resolver is obvious (if the id is present, return the article with that id; if the slug is present, return the article with that slug), this is a lot to ask a computer program to guess.
I have answered a similar question recently. To prevent the second query from running, you have to implement a cache redirect. The downside is that you have to keep your cache redirects on the client and your resolvers on the server in sync.
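For reference, here is a minimal sketch of such a redirect using Apollo Client 2's cacheRedirects option on InMemoryCache (the Article type name is taken from the question). Note that it can only help the id-based lookup, since the cache keys Article objects by id:

import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      // When the article field is queried with an id, look for an existing
      // Article:<id> entry in the cache before hitting the network.
      article: (_, args, { getCacheKey }) =>
        args.id ? getCacheKey({ __typename: 'Article', id: args.id }) : undefined,
    },
  },
});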

I have hit this same problem. In essence, I expected the cache lookup to simply fail when it attempts a lookup with the slug only, and I was fine with that. Instead, it fails to produce a correct lookup and a null result is returned as the query response, as though it were a successful response. Oops.
In order to avoid side effects, I will just use a separate GraphQL query that accepts a slug instead of an ID. This has a couple of other benefits; for instance, I can mark the field as required in each respective query. The main thing is that it makes the ID-based query more deterministic and thus more compatible with caching.
type Query {
  ...
  article(id: ID!): Article
  articleBySlug(slug: String!): Article
  ...
}
Even better would be the ability to search the cache for a matching result using your slug value, but this doesn't seem to be supported yet without making the slug part of the cache ID itself.
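For illustration only, making the slug part of the cache ID could look like the following dataIdFromObject sketch (Apollo Client 2). This is an assumption-laden workaround, not a recommendation, since every query selecting an Article would then also have to select slug for normalization to keep working:

import { InMemoryCache, defaultDataIdFromObject } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  dataIdFromObject: object => {
    // Key Article objects by slug instead of id.
    // Caveat: any query that omits slug falls through to the default and
    // produces a second, disconnected cache entry for the same article.
    if (object.__typename === 'Article' && object.slug) {
      return `Article:${object.slug}`;
    }
    return defaultDataIdFromObject(object);
  },
});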

Related

Apollo GraphQL cache for common data in different queries?

I have a web-based application with two GraphQL queries that share some data. The first query, FullProject, is more or less a very broad "let's pull all the data the client might need" query and contains many nested resources. For this question, the important thing is that it also pulls in loads of users:
query FullProject($id: ID!) {
  projects(input: {filter: {id: $id}}) {
    nodes {
      id
      name
      relatedUsers {
        id
        name
      }
      # Many more
    }
  }
}
The second query is used to populate a list of users:
query NameUser($id: ID!) {
  users(input: {filter: {id: $id}}) {
    nodes {
      id
      name
    }
  }
}
When I check the GraphQL cache (using the Apollo Developer Tools) after running FullProject, I can see that the data has been properly normalized and I have entries like:
User:1
  name: A
---
User:2
  name: B
When I run the NameUser query, however, it always results in one new request per user. After the first request for a given user the cache properly kicks in, but this still means I can end up with possibly hundreds of queries for data that is technically already in the cache (albeit via a different query). I was hoping the Apollo Client would be able to leverage the cache even across different top-level queries. Am I doing something wrong, or is my assumption incorrect?

Auto-update of apollo client cache after mutation not affecting existing queries

I have a mutation (UploadTransaction) that returns a list of objects of type Transaction.
#import "TransactionFields.gql"

mutation UploadTransaction($files: [Upload!]!) {
  uploadFile(files: $files) {
    transactions {
      ...TransactionFields
    }
  }
}
A Transaction returned from the backend (graphene) has id and __typename fields, so it should automatically update Transaction objects in the cache. In the Chrome dev tools for Apollo, I can see the new transactions.
I also have a query GetTransactions fetching all Transaction objects.
#import "TransactionFields.gql"

query GetTransactions {
  transactions {
    ...TransactionFields
  }
}
However, I don't see the newly added Transactions being returned by the query. During the initial load, the Apollo client loaded 292 transactions, which it shows under ROOT_QUERY, and it keeps returning the same 292 transactions. The UploadTransaction mutation adds new objects of type Transaction to the cache (visible in the dev tools) without affecting ROOT_QUERY in the dev tools or my query in code.
TransactionFields.gql is:
fragment TransactionFields on Transaction {
  id
  timestamp
  description
  amount
  category {
    id
    name
  }
  currency
}
Any idea what I am doing wrong? I am new to Apollo Client and GraphQL.
From the docs:
If a mutation updates a single existing entity, Apollo Client can automatically update that entity's value in its cache when the mutation returns. To do so, the mutation must return the id of the modified entity, along with the values of the fields that were modified. Conveniently, mutations do this by default in Apollo Client...
If a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
If you have a query that returns a list of entities (for example, users) and you then create or delete a user, Apollo has no way of knowing that the list should be updated to reflect your mutation. The reason for this is twofold:
First, there's no way for Apollo to know what a mutation is actually doing. All it knows is what fields you are requesting and what arguments you are passing to those fields. We might assume that a mutation whose name includes words like "insert" or "create" is inserting something on the backend, but that's not a given.
Second, there's no way to know that inserting, deleting, or updating a user should update a particular query. Your query might be for all users with the name "Bob"; if you create a user with the name "Susan", the query shouldn't be updated to reflect that addition. Similarly, if a mutation updates a user, the query may or may not need to reflect the change. Whether it should ultimately boils down to business rules that only your server knows about.
So, in order to update the cache, you have two options:
1. Trigger a refetch of the relevant queries. You can do this either by passing a refetchQueries option to your useMutation hook, or by manually calling refetch on those queries. Since this requires one or more additional requests to your server, it's the slower and more expensive option, but it can be the right one when A) you don't want to inject a bunch of business logic into your client, or B) the updates to the cache are complicated and extensive.
2. Provide an update function to your useMutation hook that tells Apollo how to update the cache based on the result of the mutation. This saves you from making additional requests, but it does mean you have to duplicate some business logic between your server and your client.
The example of using update from the docs:
update(cache, { data: { addTodo } }) {
  const { todos } = cache.readQuery({ query: GET_TODOS });
  cache.writeQuery({
    query: GET_TODOS,
    data: { todos: todos.concat([addTodo]) },
  });
}
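For comparison, the refetch-based option (the first approach above) might look like this minimal sketch, where ADD_TODO and GET_TODOS are assumed to be defined as in the docs example:

const [addTodo] = useMutation(ADD_TODO, {
  // Re-run GET_TODOS against the server once the mutation completes,
  // replacing whatever the cache currently holds for that query.
  refetchQueries: [{ query: GET_TODOS }],
});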
Read the docs for additional details.

Update Apollo cache after object creation

What are all the different ways of updating the Apollo InMemoryCache after a mutation? From the docs, I can see:
1. ID-based updates, which Apollo performs automatically
   - Happens only for single updates to existing objects.
   - Requires an id field that uniquely identifies each object, or the cache must be configured with a dataIdFromObject function that provides a unique identifier.
2. "Manual" cache updates via update functions
   - Required for object creation, deletion, or updates of multiple objects.
   - Involves calling cache.writeQuery with details including which query should be affected and how the cache should be changed.
3. Passing the refetchQueries option to the useMutation hook
   - The calling code says which queries should be re-fetched from the API, Apollo does the fetching, and the results replace whatever is in the cache for the given queries.
Are there other ways that I've missed, or have I misunderstood anything about the above methods?
I am confused because I've been reading the code of a project which uses Apollo for all kinds of mutations, including creations and deletions, but I don't see any calls to cache.writeQuery, nor any usage of refetchQueries. How does the cache get updated after creations and deletions without either of those?
In my own limited experience with Apollo, the cache is not automatically updated after an object creation or deletion, not even if I define dataIdFromObject. I have to update the cache myself by writing update functions.
So I'm wondering if there is some secret config I've missed to make Apollo handle it for me.
The only way to create or delete a node and have Apollo automatically update the cache to reflect the change is to have the mutation return the parent of whatever field contains the updated list. For example, let's say we have a schema like this:
type Query {
  me: User
}

type User {
  id: ID!
  posts: [Post!]!
}

type Post {
  id: ID!
  body: String!
}
By convention, if we had a mutation to add a new post, the mutation field would return the created post.
type Mutation {
  writePost(body: String!): Post!
}
However, we could have it return the logged in User instead (the same thing the me field returns):
type Mutation {
  writePost(body: String!): User!
}
By doing so, we enable the client to send a mutation like:
mutation WritePost($body: String!) {
  writePost(body: $body) {
    id
    posts {
      id
      body
    }
  }
}
Here Apollo will not only create or update cache entries for all the returned posts, but also update the returned User object, including its list of posts.
So why is this not commonly done? Why does Apollo's documentation suggest using writeQuery when adding or deleting nodes?
The above will work fine when your schema is simple and you're working with a relatively small amount of data. However, returning the entire parent node, including all its relations, can be noticeably slower and more resource-intensive once you're dealing with more data. Additionally, in many apps a single mutation could impact multiple queries inside the cache. The same node could be returned by any number of fields in the schema, and even the same field could be part of a number of different queries that utilize different filters, sort parameters, etc.
These factors make it unlikely that you'll want to implement this pattern in production but there certainly are use cases where it may be a valid option.

In my query, could I use the result of a parameter to get more info in that query?

Forgive my terribly-worded question but here's some code to explain what I'm trying to do (slug and value are provided outside this query):
const query = `{
  post(slug: "${slug}") {
    content
    createdAt
    id    # <-- I want this id for my reply query
    slug
  }
  reply(replyTo: "id") {    # <-- the second query in question
    content
    createdAt
    id
    slug
  }
  user(id: "${value}") {
    username
  }
}`;
I just got started with GraphQL and I'm loving the fact that I can query multiple databases in one go. It'd be great if I could also perform some "queryception" but I'm not sure if this is possible.
When thinking in terms of GraphQL, it's important to remember that the fields at a given level of a query are resolved simultaneously.
For example, when your post query returns a Post type, GraphQL resolves the content and createdAt fields at the same time. Once those fields are resolved, it moves on to the next "level" of the query (for example, if content returned an object type instead of a scalar, it would then try to resolve that type's fields).
Each of your individual queries (post, reply, and user) is actually a field on the root Query type, and the same logic applies there as well. That means there's no way to reference the id returned by post inside reply -- both queries are fired off at the same time.
An exception to the above exists in the form of mutations, which are resolved sequentially rather than simultaneously. That means that even though you still couldn't use the result of post as a variable inside your reply query, you could use context to pass the id from one resolver to the other if both were mutations. This, however, is very hackish and requires the client to request the mutations in a specific order.
A more viable solution is to simply handle this on the client side by breaking it up into two requests and waiting to fire the second until the first one returns, as sketched below.
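Here is a rough sketch of that client-side approach, assuming an ApolloClient instance named client plus hypothetical POST_QUERY and REPLIES_QUERY documents:

// Fetch the post first to learn its id.
const { data } = await client.query({
  query: POST_QUERY,
  variables: { slug },
});

// Only then fetch the replies that reference that id.
const { data: replyData } = await client.query({
  query: REPLIES_QUERY,
  variables: { replyTo: data.post.id },
});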
Lastly, you may consider reworking your schema to avoid needing multiple queries in the first place. For example, your Post type could simply have a replies field that resolves to all replies that correspond to the returned post's id.
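The schema change might look something like this (a sketch; the replies field name is an assumption):

type Post {
  id: ID!
  content: String!
  createdAt: String!
  slug: String!
  # Resolved on the server from the parent post's id,
  # so the client never needs the id as an input.
  replies: [Post!]!
}

With that in place, a single query can fetch the post and its replies together.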

GraphQL pre-approved queries

I read that Facebook's internal servers accept any queries in dev mode, and these are cached. In production, only a pre-approved/cached query is permitted. This was mentioned as a model which other servers should adopt.
Does anyone know what tools they use for that? Is this process described in more detail somewhere?
I don't know how it's done at Facebook, but I can explain how I did it in GraphQL Guru. As GraphQL is language agnostic, I'll explain without being language specific.
The way persisted queries work is that a client sends a unique query id, along with any variables, to a GraphQL server that is set up for persisted queries:
{
  "id": "1234",
  "variables": {
    "firstName": "John",
    "lastName": "Smith"
  }
}
For the id, don't use a hash of the query, as this results in long ids, which kind of defeats the purpose.
On your server, create a file with the same name as the persisted query id, containing the actual GraphQL query. Or save it in a database.
To get the GraphQL query, you will need to intercept the request via middleware. The middleware retrieves the GraphQL query by its id and passes it on to the GraphQL endpoint; depending on how the query was stored, the middleware may need to parse it. The middleware is also where you can enforce the whitelist, rejecting requests whose persisted query id does not exist.
The GraphQL endpoint then processes the query as normal.
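As a rough sketch of that middleware in Express (the queries/ directory layout and file naming are assumptions, not part of any library API):

const fs = require('fs');
const path = require('path');

function persistedQuery(req, res, next) {
  const { id, variables } = req.body;
  const file = path.join(__dirname, 'queries', `${id}.graphql`);
  // Whitelisting: ids without a stored query never reach the endpoint.
  if (!fs.existsSync(file)) {
    return res.status(400).json({ error: `Unknown persisted query: ${id}` });
  }
  // Rewrite the body into a standard GraphQL request and pass it along.
  req.body = { query: fs.readFileSync(file, 'utf8'), variables };
  next();
}

// e.g. app.use('/graphql', persistedQuery);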
You can see a nodejs example here https://github.com/otissv/guru-express-server/blob/master/src/routes/graphql-route.js
