Migrating from the updateQuery option in fetchMore to a field-level policy for nested queries - apollo-client

I have a question regarding this comment:
The updateQuery callback for fetchMore is deprecated, and will be removed
in the next major version of Apollo Client.
Please convert updateQuery functions to field policies with appropriate
read and merge functions, or use/adapt a helper function (such as
concatPagination, offsetLimitPagination, or relayStylePagination) from
@apollo/client/utilities.
The field policy system handles pagination more effectively than a
hand-written updateQuery function, and you only need to define the policy
once, rather than every time you call fetchMore.
I noticed that when dealing with nested pagination I had to provide:
A field-level policy, to prevent additional network requests when performing the fetchMore request.
The policy would result in something like the following in the cache:
parent: {
  ...,
  child({"after":"","first":100}): {
    ...,
  },
  child({"after":"MW","first":100}): {
    ...,
  }
}
Also an updateQuery option, in order for the component to be remounted with the new data. Without providing an updateQuery function, the React component did not seem to remount.
Is there an alternative way to remount the component after performing fetchMore?
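For reference, the kind of field-level policy described above might look roughly like the following sketch, which leans on the relayStylePagination helper mentioned in the deprecation notice; Parent and child are placeholder names taken from the cache snapshot, and the sketch assumes child is a Relay-style connection paginated with first / after:
import { InMemoryCache } from '@apollo/client'
import { relayStylePagination } from '@apollo/client/utilities'

const cache = new InMemoryCache({
  typePolicies: {
    Parent: {
      fields: {
        // Merge every page into a single child field instead of storing
        // one cache entry per "after" cursor
        child: relayStylePagination(),
      },
    },
  },
})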

Related

How can I specify maximum cache-time for any data matching a regex in apollo-client with InMemoryCache?

Some fields coming from the graphql server will have the shape short-lived-token-XYZ123. Ideally we wouldn't even have to know the field names ahead of time, as any code we write will live in a library. How can I hook into the InMemoryCache or the ApolloClient object to set the cache time of fields with values matching a regex? Causing them to poll at a set interval would be really ideal, but because polling is query-centric, I don't think that is possible at the field level. Giving them a specific cache time would be enough. Is there a way to hook into the InMemoryCache with a function that gets called on every read?
Another option would be to make these token strings a graphql type Token like
type Token {
id: String
}
and then in the client it might be possible to define a custom cache behavior for this type when initializing the cache like
new InMemoryCache({
  typePolicies: {
    Token: {
      fields: {
        id: {
          read(cachedVal) {
            // pseudo-code: cacheTimeElapsed stands in for whatever staleness check is needed
            if (cacheTimeElapsed) {
              return null
            } else {
              return cachedVal
            }
          }
        }
      },
    },
  },
})
But I'm also unclear HOW to bust the cache using the read function. What do I return from the function to tell the cache that it is busted and needs to refetch? These docs are...challenging. If I could just call a function on every single read and do what I need to do, that would be ideal.
These fields will also be annotated in the apollo-server with @token (for other reasons), and we could potentially hook in here to somehow tell the client to cache-bust these fields. Not sure how, but it's another option.
I posted the same question on the Apollo forums and received the answer that, remarkably, they don't support setting specific cache times or invalidating the cache from the read function of the typePolicies. It is apparently on the roadmap.
A third party caching library was suggested instead: https://github.com/NerdWalletOSS/apollo-cache-policies
Looking at the "Why does this exist?" section of the NerdWallet README, you can see they mention that this is a common pain point with the InMemoryCache
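For reference, a per-type TTL with that library might look roughly like this (a sketch based on its README; InvalidationPolicyCache is its drop-in replacement for InMemoryCache, and timeToLive is in milliseconds):
import { InvalidationPolicyCache } from '@nerdwallet/apollo-cache-policies'

// Sketch: evict cached Token entities once they are older than one minute
const cache = new InvalidationPolicyCache({
  invalidationPolicies: {
    types: {
      Token: {
        timeToLive: 60 * 1000,
      },
    },
  },
})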

Amplify and AppSync not updating data on mutation from multiple sources

I have been attempting to interact with AppSync/GraphQL from:
Lambda - Create (works), Update (does not change data)
Angular - Create/Update subscription received, but object is null
Angular - Spoof update (does not change data)
AppSync Console - Spoof update (does not change data)
Post:
mutation MyMutation {
  updateAsset(input: {
    id: "b34d3aa3-fbc4-48b5-acba-xxxxxxxxxxx",
    owner: "51b691a5-d088-4ac0-9f46-xxxxxxxxxxxx",
    description: "AppSync"
  }) {
    id
    owner
    description
  }
}
Response:
{
  "data": {
    "updateAsset": {
      "id": "b34d3aa3-fbc4-48b5-acba-xxxxxxxxxx",
      "owner": "51b691a5-d088-4ac0-9f46-xxxxxxxxxxx",
      "description": "Edit Edit from AppSync"
    }
  }
}
The version in DynamoDB gets auto-incremented each time I send the query. But the description remains the same as originally set.
Auth Rules on Schema -
@auth(
  rules: [
    { allow: public, provider: apiKey, operations: [create, update, read] },
    { allow: private, provider: userPools, operations: [read, create, update, delete] },
    { allow: groups, groups: ["admin"], operations: [read, create, update, delete] }
  ])
For now on the Frontend I'm cheating and just requesting the data after I received a null subscription event. But as I've stated I only seem to be able to set any of the data once and then I can't update it.
Any insight appreciated.
Update: I even decided to try a DeleteAsset statement and it won't delete but revs the version.
I guess maybe the next sane thing to do is to either stand up a new environment or attempt to stand this up in a fresh account.
Update: I have a working theory this has something to do with Conflict detection / rejection. When I try to delete via AppSync direct I get a rejection. From Angular I just get the record back with no delete.
After adding additional Auth on the API, I remember it asked about conflict resolution and I chose "AutoMerge". Doc on this at https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html
After further review I'll note what happened in the hopes it helps someone else.
Created the API with amplify add api
This walked me through a wizard. I used the existing Cognito UserPool since I had not foreseen I would need to call this API from an S3 Trigger (Lambda Function) later.
Now, needing to grant apiKey or preferably IAM access from the Lambda to the AppSync/GraphQL API, I performed amplify update api and added the additional Auth setting.
This asked me how I wanted to resolve conflicts, since more than one source can edit the data. Because I just hit "agree" on Terms and Conditions and rarely read the manual, I selected 'AutoMerge'.. sounds nice, right?
So now if you read the fine print, edits made to a table will be rejected as we now have this _version (Int) that would need to get passed so AutoMerge can decide if it wants to take your change.
It also creates an extra DataStore Table in DynamoDB tracking versions. So in order to properly deal with this strategy you'd need to extend your schema to include _version not just id or whatever primary key you opted to use.
Also note: if you delete a record, it sets a _deleted Bool to true. The record is actually still returned to the UI, so now your initial query needs to filter off (or not) deleted records.
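As an illustration, filtering those soft-deleted records on the client might look something like this (a sketch; listAssets and the item shape are assumptions based on the Asset model above):
import { API, graphqlOperation } from 'aws-amplify'
import { listAssets } from './graphql/queries' // assumed auto-generated query

// Sketch: drop records that a sync-enabled AppSync API marks with _deleted: true
async function fetchVisibleAssets() {
  const result: any = await API.graphql(graphqlOperation(listAssets))
  const items = result?.data?.listAssets?.items ?? []
  return items.filter((item: any) => item && !item._deleted)
}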
I determined I also didn't need this. I don't want to use a DataStore (at least not now), so: I found the offender in transform.conf.json within the API. After executing amplify update api, GraphQL, I chose 'Disable DataStore for entire API' and it got rid of the ConflictHandler and ConflictDetection.
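For reference, the conflict-resolution block that the wizard writes into (and Disable DataStore removes from) transform.conf.json looks roughly like this; treat the exact keys and values as an illustration rather than a definitive reference:
{
  "ResolverConfig": {
    "project": {
      "ConflictHandler": "AUTOMERGE",
      "ConflictDetection": "VERSION"
    }
  }
}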
This was also agitating my Angular 11 subscription to Create/Update, as the added values this created broke the expected model. Not to mention the event that came back, due to nothing changing, was null.
Great information here, Mark. Thanks for the write up and updates.
I was playing around with this and with the Auto Merge conflict resolution strategy I was able to post an update using a GraphQL mutation by sending the current _version member along.
This function:
await API.graphql(
  graphqlOperation(updateAsset, {
    input: {
      id: assetToUpdate.id,
      name: "Updated name",
      _version: assetToUpdate._version
    }
  })
);
Properly updates, contacts AppSync, and propagates the changes to DynamoDB/DataStore. Using the current version tells AppSync that we are up-to-date and able to edit the content. Then AppSync manages/increments the _version/_createdAt/etc.
Adding _version to my mutation worked very well.
API.graphql({
  query: yourQuery,
  variables: {
    input: {
      id: 'your-id',
      ...
      _version: version,
    },
  },
});

How to query data from 2 APIs

I have set up a Gatsby client which connects to Contentful using the gatsby-source-contentful plugin. I have also connected a simple custom API using the gatsby-source-graphql plugin.
When I run the dev-server I am able to query my pages from Contentful in the playground.
I am also able to query my custom API through the playground as well.
So both APIs work and are connected with Gatsby properly.
I want to programmatically generate a bunch of pages that have dynamic sections (references) which an author can add and order as she wishes.
I do achieve this using the ...on Node connection together with fragments I define within each dynamic section. It all works out well so far.
My actual problem:
Now I have a dynamic section which is a Joblist. This component needs to get data out of the Contentful API, as it stores values like latitude and longitude. So the author is free to set a point on a map and set a radius. I successfully get this information out of Contentful using a fragment inside the component:
export const query = graphql`
  fragment JoblistModule on ContentfulJoblisteMitAdresse {
    ... on ContentfulJoblisteMitAdresse {
      contentful_id
      radius
      geo {
        lon
        lat
      }
    }
  }
`
But how can I pass this information into another query that fetches the job data from my custom API? If I understand Gatsby correctly, I somehow have to connect these two APIs together? Or can I run another query somehow that fetches these values passed in as variables? How and where would I achieve this?
I could not find an approach in gatsby-node.js (since passed-in context can only be used as variables inside a query), in the template file (since I can run only one query at a time), or in the component itself (since it only accepts a static query).
I don't know where my misunderstanding is, so I would very much appreciate any hints, help or examples.
Since your custom API is a graphQL API, you can use delegateToSchema from the graphql-tools package to accomplish this.
You will need to create a resolver using Gatsby's setFieldsOnGraphQLNodeType API. Within this resolver, your resolve function will call delegateToSchema.
We have a similar problem: our blog posts have an "author" field which contains an ID. We then do a GraphQL query to another system to look up author info by that ID.
return {
  remoteAuthor: {
    type: person,
    args: {},
    resolve: async (source: ContentfulBlogPost, fieldArgs, context, info) => {
      if (!source.author) {
        return null
      }
      // runs the selection on the remote schema
      // https://github.com/gatsbyjs/gatsby/issues/14517
      return delegateToSchema({
        schema: authorsSchema,
        operation: 'query',
        fieldName: 'Person',
        args: { id: source.author },
        context,
        info,
      })
    },
  },
}
This adds a 'remoteAuthor' field to our blog post type, and whenever it gets queried, those selections are proxied to the remote schema where the person type exists.
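To show where that return value plugs in, the wiring in gatsby-node might look roughly like this (a sketch; buildRemoteAuthorFields is a hypothetical helper that returns the remoteAuthor field definition shown above):
// gatsby-node.ts (sketch)
import { buildRemoteAuthorFields } from './remote-author-fields' // hypothetical helper

export const setFieldsOnGraphQLNodeType = ({ type }: { type: { name: string } }) => {
  // Only extend Contentful blog post nodes; leave every other type untouched
  if (type.name !== 'ContentfulBlogPost') {
    return {}
  }
  return buildRemoteAuthorFields()
}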

How to separate logic when updating an Apollo cache that is used as a global store?

Using the Apollo cache as a global store - for remote and local data - is very convenient.
However, while I've never used Redux, I think that the most important thing about it is implementing Flux: an event-driven architecture in the front-end that separates logic and ensures separation of concerns.
I don't know how to implement that with Apollo. The doc says
When a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
Adding an update function in one part of the application that handles all cache updates, by updating queries and/or fragments for all the other parts of the application, is exactly what we want to avoid in a Flux / event-driven architecture.
To illustrate this, let me give a single simple example. Here, we have at least 3 linked components:
1. InboxCount
Component that shows the number of Inbox items in the SideNav
query getInboxCount {
  inbox {
    id
    count
  }
}
2. Inbox list items
Component that displays items on the Inbox page
query getInbox {
  inbox {
    id
    items {
      ...ItemPreview
      ...ItemDetail
    }
  }
}
Both of those components read data from those GraphQL queries via auto-generated hooks, i.e. const { data, loading } = useGetInboxItemsQuery()
3. AddItem
Component that creates a new item. Because it creates a new entity, I need to manually update the cache. So I am forced to write
(pseudo-code)
const [addItem, { loading }] = useCreateItemMutation({
  update(cache, { data }) {
    const cachedData = cache.readQuery<GetInboxItemsQuery>({
      query: GetInboxItemsDocument,
    })
    if (cachedData?.inbox) {
      // 1. Update items list GetInboxItemsQuery
      const newItems = cachedData.inbox.items.concat(data.items)
      cache.writeQuery({
        query: GetInboxItemsDocument,
        data: {
          inbox: {
            id: 'me',
            __typename: 'Inbox',
            items: newItems,
          },
        },
      })
      // 2. Update another query wrapped into another reusable method, here
      setInboxCount(cache, newItems.length)
    }
  },
})
Here, my AddItem component must be aware of my different other queries / fragments declared in my application 😭 Moreover, as it's quite verbose, complexity increases very fast in the update method, especially when multiple lists / queries should be updated like here.
Does anyone have recommendations for implementing more independent components? Am I wrong with how I created my queries?
The unfortunate truth about update is that it trades simplicity for performance. A truly "dumb" client would only receive data from the server and render it, never manipulating it. By instructing Apollo how to modify our cache after a mutation, we're inevitably duplicating the business logic that already exists on our server. The only way to avoid this is to either:
Have the mutation return a larger section of the graph. For example, if a user creates a post, instead of returning the created post, return the complete user object, including all of the user's posts.
Refetch the affected queries.
Of course, often neither approach is particularly desirable and we opt for injecting business logic into our client apps instead.
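For the refetch option, a minimal sketch could look like this (GetInboxCountDocument is a hypothetical generated document for the getInboxCount query above):
const [addItem, { loading }] = useCreateItemMutation({
  // Let Apollo refetch the affected queries instead of hand-writing update()
  refetchQueries: [
    { query: GetInboxItemsDocument },
    { query: GetInboxCountDocument },
  ],
})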
Separating this business logic could be as simple as keeping your update functions in a separate file and importing them as needed. This way, at least you can test the update logic separately. You may also prefer a more elegant solution like utilizing a Link. apollo-link-watched-mutation is a good example of a Link that lets you separate the update logic from your components. It also solves the issue of having to keep track of query variables in order to perform those updates.
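Keeping the update logic in its own module might look something like this (a sketch; addItemToInbox, the ./generated module and the data.createItem field are hypothetical stand-ins for the real codegen output):
// cacheUpdates.ts (sketch) - reusable cache-update helpers live in one place
import { ApolloCache } from '@apollo/client'
import { GetInboxItemsDocument, GetInboxItemsQuery } from './generated'

export function addItemToInbox(cache: ApolloCache<unknown>, newItem: any) {
  const cachedData = cache.readQuery<GetInboxItemsQuery>({ query: GetInboxItemsDocument })
  if (!cachedData?.inbox) return
  cache.writeQuery({
    query: GetInboxItemsDocument,
    data: {
      inbox: { ...cachedData.inbox, items: cachedData.inbox.items.concat(newItem) },
    },
  })
}

// AddItem component - only wires the helper up, no inline cache logic
const [addItem] = useCreateItemMutation({
  update: (cache, { data }) => {
    if (data?.createItem) addItemToInbox(cache, data.createItem)
  },
})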

React Apollo - multiple mutations

I'm using react-apollo@2.5.6
I have a component; when you click on it, it will, based on its "selected" state, issue either an add or a remove operation.
Currently I'm doing this to have 2 mutation functions injected into my component. Is that the correct way to do it? Am I able to just use one Mutation component (HOC) instead of multiple?
<Mutation mutation={ADD_STUFF}>
  {(addStuff) => (
    <Mutation mutation={REMOVE_STUFF}>
      {(removeStuff) => {
And later in the wrapped component, I will do something like this:
onClick={(e) => {
  e.preventDefault()
  const input = {
    variables: {
      userId: user.id,
      stuffId: stuff.id,
    },
  }
  // Based on selected state, I will call either add or remove
  if (isSelected) {
    removeStuff(input)
  } else {
    addStuff(input)
  }
}}
Thanks
Everything is possible but usually costs time and money ;) ... in this case simplicity, readability, manageability.
1st solution
A common mutation, e.g. named 'change', with a changeType parameter.
Of course that requires API change - you need a new resolver.
2nd solution
Using graphql-tag you can construct any query from a string. Take inspiration from this answer - with the 'classic graphql HOC' pattern.
This solution doesn't require API change.
I think using two different Mutation components does not make sense. If I understand correctly, there are two ways to solve your problem.
Using the Apollo Client client.mutate function to perform the mutation manually based on the state, setting the mutation and variables properties based on the new state. To access the client in the current component, you need to pass the client along from the parent component where it was created to the child components where the mutation takes place.
Using a single Mutation component inside the render method of your component and setting its mutation and variables attributes based on the state variable.
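A minimal sketch of the second approach, reusing isSelected, user and stuff from the question, could look like this:
{/* One Mutation component; the document it runs is chosen from state */}
<Mutation mutation={isSelected ? REMOVE_STUFF : ADD_STUFF}>
  {(toggleStuff) => (
    <button
      onClick={(e) => {
        e.preventDefault()
        toggleStuff({ variables: { userId: user.id, stuffId: stuff.id } })
      }}
    >
      {isSelected ? 'Remove' : 'Add'}
    </button>
  )}
</Mutation>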
The approach that you are using works, as you said, but to me it looks like you are delegating some logic to the UI that should be handled by the underlying service based on the isSelected input.
I think that you should create a single mutation covering both ADD_STUFF and REMOVE_STUFF: I would create an ADD_OR_REMOVE_STUFF mutation and choose the add or remove behavior in the resolver.
Having one mutation is easier to maintain/expand/understand, if the logic requires something else besides add/remove, for example if you have to choose add/remove/update/verify/transform, would you nest 5 mutations?
In that case the single mutation could be named MULTI_HANDLE_STUFF, and only that one mutation would be called from the UI.
