React Apollo - multiple mutations - graphql

I'm using react-apollo#2.5.6
I have a component; when you click on it, it will, based on its "selected" state, issue either an add or a remove operation.
Currently I'm doing this to have two mutation functions injected into my component. Is that the correct way to do it? Can I use just one Mutation (HOC) instead of multiple?
<Mutation mutation={ADD_STUFF}>
  {(addStuff) => (
    <Mutation mutation={REMOVE_STUFF}>
      {(removeStuff) => {
And later in the wrapped component, I will do something like that
onClick={(e) => {
  e.preventDefault()
  const input = {
    variables: {
      userId: user.id,
      stuffId: stuff.id,
    },
  }
  // Based on selected state, I will call either add or remove
  if (isSelected) {
    removeStuff(input)
  } else {
    addStuff(input)
  }
}}
Thanks

Everything is possible but usually costs time and money ;) ... in this case simplicity, readability, and manageability.
1st solution
A common mutation, e.g. named 'change', with a changeType parameter.
Of course that requires an API change - you need a new resolver.
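A possible shape for that change (changeStuff, StuffChangeType and the CHANGE_STUFF document are made-up names, adjust to your schema):
enum StuffChangeType {
  ADD
  REMOVE
}

type Mutation {
  changeStuff(userId: ID!, stuffId: ID!, changeType: StuffChangeType!): Stuff
}
On the client you then keep a single Mutation component and pass changeType: isSelected ? 'REMOVE' : 'ADD' in the variables.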
2nd solution
Using graphql-tag you can construct any query from a string. Take inspiration from this answer - with the 'classic graphql HOC' pattern.
This solution doesn't require an API change.
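For illustration, a rough sketch of building the document from a string with graphql-tag (buildStuffMutation and the field names are made up, adjust them to your schema):
import gql from 'graphql-tag'

// Interpolate the field name ('addStuff' or 'removeStuff') into the document string
const buildStuffMutation = (fieldName) => gql`
  mutation ($userId: ID!, $stuffId: ID!) {
    ${fieldName}(userId: $userId, stuffId: $stuffId) {
      id
    }
  }
`

// Usage: <Mutation mutation={buildStuffMutation(isSelected ? 'removeStuff' : 'addStuff')}>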

I think using two different Mutation components does not make sense. If I understand correctly, there are two ways to solve your problem (sketched below):
1. Use the Apollo Client client.mutate function to run the mutation manually, setting the mutation and variables options based on the new state. To access the client in the current component, you need to pass the client down from the parent component where it was created to the child components where the mutation takes place.
2. Use a single Mutation component inside the render method of your component and set its mutation and variables attributes based on the state variable.
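Not a drop-in implementation, just a sketch of the first option, assuming react-apollo 2.x where ApolloConsumer (or withApollo) is one way to get hold of the client:
import { ApolloConsumer } from 'react-apollo'

<ApolloConsumer>
  {(client) => (
    <button
      onClick={() =>
        client.mutate({
          mutation: isSelected ? REMOVE_STUFF : ADD_STUFF,
          variables: { userId: user.id, stuffId: stuff.id },
        })
      }
    >
      Toggle stuff
    </button>
  )}
</ApolloConsumer>
The second option is the same idea without client.mutate: render <Mutation mutation={isSelected ? REMOVE_STUFF : ADD_STUFF}> and the single mutate function you get back will run whichever document is currently selected.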

The approach that you are using works, as you said, but to me it looks like you are delegating to the UI some logic that should be handled by the underlying service based on the isSelected input.
I think that you should create a single mutation for ADD_STUFF and REMOVE_STUFF, I would create the ADD_OR_REMOVE_STUFF mutation, and choose the add or remove behavior on the resolver.
Having one mutation is easier to maintain/expand/understand. If the logic ever requires something besides add/remove - for example if you have to choose between add/remove/update/verify/transform - would you nest 5 mutations?
In that case the single mutation could be named MULTI_HANDLE_STUFF, and only that one mutation would be called from the UI.
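To make that concrete, a rough server-side sketch (addOrRemoveStuff and the dataSources helpers are hypothetical, not an existing API):
const resolvers = {
  Mutation: {
    addOrRemoveStuff: async (_parent, { userId, stuffId }, { dataSources }) => {
      // Decide add vs. remove from the current state, so the UI doesn't have to
      const alreadySelected = await dataSources.stuff.isSelected(userId, stuffId)
      return alreadySelected
        ? dataSources.stuff.remove(userId, stuffId)
        : dataSources.stuff.add(userId, stuffId)
    },
  },
}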

Related

How to keep derived state up to date with Apollo/GraphQL?

My situation is this: I have multiple components in my view that ultimately depend on the same data, but in some cases the view state is derived from the data. How do I make sure my whole view stays in sync when the underlying data changes? I'll illustrate with an example using everyone's favorite Star Wars API.
First, I show a list of all the films, with a query like this:
# ALL_FILMS
query {
  allFilms {
    id
    title
    releaseDate
  }
}
Next, I want a separate component in the UI to highlight the most recent film. There's no query for that, so I'll implement it with a client-side resolver. The query would be:
# MOST_RECENT_FILM
query {
  mostRecentFilm @client {
    id
    title
  }
}
And the resolver:
function mostRecentFilmResolver(parent, variables, context) {
  return context.client.query({ query: ALL_FILMS }).then(result => {
    // Omitting the implementation here since it's not relevant
    return deriveMostRecentFilm(result.data);
  });
}
Now, where it gets interesting is when SWAPI gets around to adding The Last Jedi and The Rise of Skywalker to its film list. We can suppose I'm polling on the list so that it gets periodically refetched. That's great, now my list UI is up to date. But my "most recent film" UI isn't aware that anything has changed — it's still stuck in 2015 showing The Force Awakens, even though the user can clearly see there are newer films.
Maybe I'm spoiled; I come from the world of MobX where stuff like this Just Works™. But this doesn't feel like an uncommon problem. Is there a best practice in the realm of Apollo/GraphQL for keeping things in sync? Am I approaching this problem in entirely the wrong way?
A few ideas I've had:
My "most recent film" query could also poll periodically. But you don't want to poll too often; after all, Star Wars films only come out every other year or so. (Thanks, Disney!) And depending on how the polling intervals overlap there will still be a big window where things are out of sync.
Instead of putting the deriveMostRecentFilm logic in a resolver, just put it in the component and share the ALL_FILMS query between components. That would work, but that's basically answering "How do I get this to work in Apollo?" with "Don't use Apollo."
Some complicated system of keeping track of the dependencies between queries and chaining refreshes based on that. (I'm not keen to invent this if I can avoid it!)
In Apollo, components observe queried values (cached data 'slots'), but your mostRecentFilm is not observable in that way: it isn't derived from the cached values themselves (even though they are cached) but from the result of a one-time query, updated only on demand.
You're only missing an 'updating connection', e.g. like this:
# ALL_FILMS
query {
  allFilms {
    id
    title
    releaseDate
    isMostRecentFilm @client
  }
}
Use the isMostRecentFilm local resolver to update the mostRecentFilm value in the cache.
Any query (useQuery) that selects mostRecentFilm @client will then be updated automatically. All without additional queries, polling etc. - Just Works? (not tested, it should work) ;)
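Roughly, that local resolver could look like this (still untested, assuming Apollo Client 2.x local resolvers and reusing ALL_FILMS, MOST_RECENT_FILM and deriveMostRecentFilm from the question):
const resolvers = {
  Film: {
    isMostRecentFilm: (film, _variables, { cache }) => {
      // Re-derive from the allFilms data already in the cache
      const data = cache.readQuery({ query: ALL_FILMS });
      const mostRecent = deriveMostRecentFilm(data);
      // Keep the client-only mostRecentFilm field in sync as a side effect
      // (you may need to include __typename on the written data)
      cache.writeQuery({
        query: MOST_RECENT_FILM,
        data: { mostRecentFilm: mostRecent },
      });
      return film.id === mostRecent.id;
    },
  },
};
Because the field is selected as part of ALL_FILMS, the resolver should run whenever that list is refetched, which is what keeps MOST_RECENT_FILM current.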

What does imperative mean in Apollo GraphQL?

I'm new to Apollo GraphQL. When reading its docs, I saw the word imperative mentioned several times, but I can't really find an explanation online that clarifies what exactly "imperative" means for Apollo GraphQL. So far, the easiest definition I've found is that imperative programming is about how you do something, and declarative programming is more about what you do.
Here is an example of mentioning imperative for Apollo GraphQL:
GraphQL’s mutations are triggered imperatively, but that’s only
because a HOC or render prop grants access to the function which
executes the mutation (e.g. on a button click). With Apollo, the
mutations and queries become declarative over imperative.
Could someone provide a solid example to help me understand what imperative mean in Apollo GraphQL?
It's actually a simpler paragraph than you realise. In this sentence, imperative is describing how one might choose to write out a GraphQL React component, for example:
import React from 'react';
import { Query } from 'react-apollo';
const Profile = () => (
  <Query query={}>
    {() => <div>My Profile</div>}
  </Query>
);
export default Profile;
While this component doesn't actually make a query (because no query is provided), we are programming it imperatively: we provide the query to the component, and the HOC triggers the query or mutation and passes the result into the props. In this example we can step through the code from creating the HOC and adding a query to calling it via props on the component. Though it's interesting to note that a GraphQL query on its own is declarative in nature.
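For reference, the "classic" HOC version of those steps looks roughly like this (PROFILE_QUERY and the data.me field are hypothetical):
import React from 'react';
import { graphql } from 'react-apollo';

// Step 1: create the HOC from a query document
const withProfile = graphql(PROFILE_QUERY);

// Step 2: the wrapped component receives the result via the injected `data` prop
const Profile = ({ data }) =>
  data.loading ? <p>Loading</p> : <div>{data.me.name}</div>;

export default withProfile(Profile);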
Declarative is best characterised as describing what we would like, and in Apollo Client the best way to visualise this is through a functional component.
const LAST_LAUNCH = gql`
  query lastLaunch {
    launch {
      id
      timestamp
    }
  }
`;

export function LastLaunch() {
  const { loading, data } = useQuery(LAST_LAUNCH);
  return (
    <div>
      <h1>Last Launch</h1>
      {loading ? <p>Loading</p> : <p>Timestamp: {data.launch.timestamp}</p>}
    </div>
  );
}
In this example you can see we are essentially executing this query / mutation using
const { loading, data } = useQuery(LAST_LAUNCH);
This line of code describes using the query written above what we would like to be returned making it a declarative statement.
In simplistic terms, the HOC component in example one has several steps that you have to follow before you can use your data. In the second example we simply describe what we would like in a single statement and receive back the data.
Finally, it's also important to mention that in programming we generally have a mixture of imperative and declarative statements / blocks of code throughout our application, and that's perfectly normal.

How to separate logic when updating Apollo cache that used as global store?

Using Apollo cache as global store - for remote and local data, is very convenient.
However, while I've never used Redux, I think that the most important thing about it is that it implements Flux: an event-driven architecture in the front-end that separates logic and ensures separation of concerns.
I don't know how to implement that with Apollo. The doc says
When a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
Adding an update function in one part of the application that handles all cache updates, by updating queries and/or fragments for all the other parts of the application, is exactly what we want to avoid in a Flux / event-driven architecture.
To illustrate this, let me give a single simple example. Here, we have (at least 3 linked components)
1. InboxCount
Component that show the number of Inbox items in SideNav
query getInboxCount {
  inbox {
    id
    count
  }
}
2. Inbox list items
Component that displays items in Inbox page
query getInbox {
  inbox {
    id
    items {
      ...ItemPreview
      ...ItemDetail
    }
  }
}
Both of those components read data from those GQL queries via auto-generated hooks, i.e. const { data, loading } = useGetInboxItemsQuery()
3. AddItem
Component that creates a new item. Because it creates a new entity, I need to manually update the cache. So I am forced to write
(pseudo-code)
const [addItem, { loading }] = useCreateItemMutation({
  update(cache, { data }) {
    const cachedData = cache.readQuery<GetInboxItemsQuery>({
      query: GetInboxItemsDocument,
    })
    if (cachedData?.inbox) {
      // 1. Update items list GetInboxItemsQuery
      const newItems = cachedData.inbox.items.concat(data.items)
      cache.writeQuery({
        query: GetInboxItemsDocument,
        data: {
          inbox: {
            id: 'me',
            __typename: 'Inbox',
            items: newItems,
          },
        },
      })
      // 2. Update another query, wrapped into another reusable method here
      setInboxCount(cache, newItems.length)
    }
  },
})
Here, my AddItem component must be aware of the various other queries / fragments declared in my application 😭 Moreover, as it's quite verbose, complexity increases very fast in the update method, especially when multiple lists / queries should be updated, like here.
Does anyone have recommendations about implementing a more independent components? Am I wrong with how I created my queries?
The unfortunate truth about update is that it trades simplicity for performance. A truly "dumb" client would only receive data from the server and render it, never manipulating it. By instructing Apollo how to modify our cache after a mutation, we're inevitably duplicating the business logic that already exists on our server. The only way to avoid this is to either:
Have the mutation return a larger section of the graph. For example, if a user creates a post, instead of returning the created post, return the complete user object, including all of the user's posts.
Refetch the affected queries.
Of course, often neither approach is particularly desirable and we opt for injecting business logic into our client apps instead.
Separating this business logic could be as simple as keeping your update functions in a separate file and importing them as needed. This way, at least you can test the update logic separately. You may also prefer a more elegant solution like utilizing a Link. apollo-link-watched-mutation is a good example of a Link that lets you separate the update logic from your components. It also solves the issue of having to keep track of query variables in order to perform those updates.
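As a minimal illustration of the "separate file" approach (file and function names are hypothetical, and I'm assuming the mutation payload is data.createItem):
// inboxCacheUpdates.js - all knowledge about inbox-related queries lives here
export function addItemToInbox(cache, newItem) {
  const cached = cache.readQuery({ query: GetInboxItemsDocument })
  if (!cached?.inbox) return
  const items = cached.inbox.items.concat(newItem)
  cache.writeQuery({
    query: GetInboxItemsDocument,
    data: { inbox: { ...cached.inbox, items } },
  })
  setInboxCount(cache, items.length)
}

// AddItem component - it only forwards the mutation result
const [addItem] = useCreateItemMutation({
  update: (cache, { data }) => addItemToInbox(cache, data.createItem),
})
This doesn't remove the duplicated business logic, but at least the component no longer needs to know which queries exist, and the update logic can be tested on its own.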

Use Graphql variables to define fields

I am trying to do something effectively like this
`query GetAllUsers($fields: [String]) {
  users {
    ...$fields
  }
}`
Where my client (currently Apollo for React) then passes in an array of fields in the variables section. The goal is to be able to pass in an array of the fields I want back and have that interpolated into the appropriate GraphQL query. This currently returns a GraphQL syntax error at $fields (it expects a { but sees $). Is this even possible? Am I approaching this the wrong way?
One other option I had considered was invoking a JavaScript function and passing that result to query(), where the function would do something like the following:
buildQuery(fields) {
  return gql`
    query {
      users {
        ${fields}
      }
    }`
}
This however feels like an unnecessary workaround.
Comments summary:
Non-standard requirements require workarounds ;)
You can use fragments (for predefined field sets), but they probably won't be freely granular (field level); see the sketch below.
Variables are definitely not for defining the query (they are for values used in the query).
Daniel's suggestion: gql-query-builder
It seems that the GraphQL community is great and full of people working on all possible use cases ... it's enough to search for solutions or ask on SO ;)
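For the fragment option, a small sketch (type and field names are illustrative): fragments let the client swap predefined field sets, just not individual fields at runtime.
import gql from 'graphql-tag'

const USER_SUMMARY = gql`
  fragment UserSummary on User {
    id
    name
  }
`

const GET_ALL_USERS = gql`
  query GetAllUsers {
    users {
      ...UserSummary
    }
  }
  ${USER_SUMMARY}
`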

graphql- same query with different arguments

Can the below be achieved with GraphQL:
We have getusers() / getusers(id=3) / getusers(name='John'). Can we use the same query to accept different parameters (arguments)?
I assume you mean something like:
type Query {
  getusers: [User]!
  getusers(id: ID!): User
  getusers(name: String!): User
}
IMHO the first thing to do is try. You should get an error saying that Query.getusers can only be defined once, which would answer your question right away.
Here's the actual spec saying that such a thing is not valid: http://facebook.github.io/graphql/June2018/#example-5e409
Quote:
Each named operation definition must be unique within a document when
referred to by its name.
Solution
From what I've seen, the most GraphQL'y way to create such an API is to define a filter input type, something like this:
input UserFilter {
  ids: [ID]
  names: [String]
}
and then:
type Query {
  users(filter: UserFilter): [User]!
}
The resolver would check what filters were passed (if any) and query the data accordingly.
This is very simple and yet really powerful as it allows the client to query for an arbitrary number of users using an arbitrary filter. As a back-end developer you may add more options to UserFilter later on, including some pagination options and other cool things, while keeping the old API intact. And, of course, it is up to you how flexible you want this API to be.
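A rough sketch of such a resolver (the dataSources.users helper is hypothetical, swap in your own data access):
const resolvers = {
  Query: {
    users: async (_parent, { filter }, { dataSources }) => {
      const all = await dataSources.users.getAll()
      if (!filter) return all
      // Apply whichever filters were provided; extend here for pagination etc.
      return all.filter(
        (u) =>
          (!filter.ids || filter.ids.includes(u.id)) &&
          (!filter.names || filter.names.includes(u.name))
      )
    },
  },
}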
But why is it like that?
Warning! I am assuming some things here and there, and might be wrong.
GraphQL is only a logical API layer, which is supposed to be server-agnostic. However, I believe that the original implementation was in JavaScript (citation needed). If you then consider the technical aspects of implementing a GraphQL API in JS, you might get an idea about why it is the way it is.
Each query points to a resolver function. In JS resolvers are simple functions stored inside plain objects at paths specified by the query/mutation/subscription name. As you may know, JS objects can't have more than one path with the same name. This means that you could only define a single resolver for a given query name, thus all three getusers would map to the same function Query.getusers(obj, args, ctx, info) anyway.
So even if GraphQL allowed for fields with the same name, the resolver would have to explicitly check for whatever arguments were passed, i.e. if (args.id) { ... } else if (args.name) { ... }, etc., thus partially defeating the point of having separate endpoints. On the other hand, there is an overall better (particularly from the client's perspective) way to define such an API, as demonstrated above.
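In other words, with a single getusers resolver you'd end up writing something like this anyway (findUserById / findUserByName / getAllUsers are placeholder helpers):
const resolvers = {
  Query: {
    getusers: (_parent, args) => {
      if (args.id) return findUserById(args.id)
      if (args.name) return findUserByName(args.name)
      return getAllUsers()
    },
  },
}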
Final note
GraphQL is conceptually different from REST, so it doesn't make sense to think in terms of three endpoints (/users, /users/:id and /users/:name), which is what I guess you were doing. A paradigm shift is required in order to unveil the full potential of the language.
A request of this type works:
query {
  first: getusers
  second: getusers(id: 3)
  third: getusers(name: "John")
}
