Abstract Relay-style connections in Apollo Client

I am building an app that will use GraphQL on the backend and Apollo Client on the front end. I am going to use Relay-style connection types, as they allow us to attach metadata to relationships.
However, we don't want our React components to have to deal with the additional complexity introduced by connections. For legacy reasons, and also because it seems cleaner, I would prefer that my React components not have to deal with nodes and edges. I prefer to pass around:
Snippet 1:
const ticket = {
  title: 'My bug',
  authors: [{ login: 'user1' }, { login: 'user2' }]
}
rather than
Snippet 2:
const ticket = {
  title: 'My bug',
  authors: {
    nodes: [
      { login: 'user1' },
      { login: 'user2' },
    ]
  }
}
Also, in TypeScript, I really don't see myself defining a ticket type that contains nodes plus metadata such as nextPage, lastPage, etc.
I am trying to come up with an abstraction, maybe at the Apollo Client level, that would automatically convert Snippet 2 into Snippet 1 while still allowing access to Snippet 2 when I actually need that metadata.
Has this problem been solved by someone else? Do you have suggestions for a possible solution? Am I heading in the wrong direction?
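For concreteness, a minimal sketch of the transform being described (a hypothetical helper, not an existing Apollo feature); note that it discards the connection metadata, which is exactly what the abstraction would still need to expose on demand:
// Recursively replace any { nodes: [...] } wrapper with its nodes array
// (Snippet 2 -> Snippet 1). Connection metadata is dropped in the process.
function flattenConnections(value) {
  if (Array.isArray(value)) return value.map(flattenConnections);
  if (value && typeof value === 'object') {
    if (Array.isArray(value.nodes)) return value.nodes.map(flattenConnections);
    return Object.fromEntries(
      Object.entries(value).map(([key, v]) => [key, flattenConnections(v)])
    );
  }
  return value;
}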

Rather than trying to solve this client-side, you can simply expose additional fields in your schema. You can see this done with the official SWAPI example:
query {
  allFilms {
    # edges
    edges {
      node {
        ...FilmFields
      }
    }
    # nodes exposed directly
    films {
      ...FilmFields
    }
  }
}
This way you can query the nodes with or without the connection as needed without having to complicate things on the client side.
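On the schema side, that pattern looks roughly like this sketch (field and type names are assumptions modeled on the SWAPI schema):
type FilmsConnection {
  pageInfo: PageInfo!
  edges: [FilmsEdge]
  # the nodes, exposed directly alongside the edges
  films: [Film]
  totalCount: Int
}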

GraphQL vs Normalized Data Structure Advantages

From the Redux docs:
This [normalized] state structure is much flatter overall. Compared to the original nested format, this is an improvement in several ways...
From https://github.com/paularmstrong/normalizr:
Many APIs, public or not, return JSON data that has deeply nested objects. Using data in this kind of structure is often very difficult for JavaScript applications, especially those using Flux or Redux.
It seems like normalized, database-ish data structures are better to work with on the front end. Then why is GraphQL so popular, if its whole language style revolves around quickly getting arbitrarily nested data? Why do people use it?
This kind of discussion is off-topic on SO ...
It's not only about [normalized] structures ...
A GraphQL client (like Apollo) also takes care of all the data-fetching nuances (error handling, caching, refetching, data conversion, and many more) that are hard to replicate with Redux alone.
They serve different use cases, and you can use both:
keep (complex) app state in Redux,
handle data fetching in Apollo (you can use it for local state, too).
Let's look at why we want to normalize the cache and what kind of work we have to do to get a normalized cache.
For the main page we fetch a list of TODOs and a list of high-priority TODOs. Our two endpoints return the following data:
{
  all: [{ id: 1, title: "TODO 1" }, { id: 2, title: "TODO 2" }, { id: 3, title: "TODO 3" }],
  highPrio: [{ id: 1, title: "TODO 1" }]
}
If we stored the data like this in our cache, we would have a hard time updating a single todo, because we would have to update it in every array it appears in, now or in the future.
We can instead normalize the data and store only references in the arrays. That way we can update a single todo in a single place:
{
  queries: {
    all: [{ ref: "Todo:1" }, { ref: "Todo:2" }, { ref: "Todo:3" }],
    highPrio: [{ ref: "Todo:1" }]
  },
  refs: {
    "Todo:1": { id: 1, title: "TODO 1" },
    "Todo:2": { id: 2, title: "TODO 2" },
    "Todo:3": { id: 3, title: "TODO 3" }
  }
}
The downside is that this shape of data is now much harder to use in our list component. We have to transform the cache back, roughly like so:
function denormalise(cache) {
  return {
    all: cache.queries.all.map(({ ref }) => cache.refs[ref]),
    highPrio: cache.queries.highPrio.map(({ ref }) => cache.refs[ref]),
  };
}
Notice how updating Todo:1 inside the cache now automatically updates every query that references that todo, as long as we run this function inside the React component (in Redux such a function is often called a selector).
The magical thing about GraphQL is that it is a strict specification with a type system. This allows GraphQL clients like Apollo to globally identify objects and normalise the cache. At the same time, the client can automatically denormalise the cache for you and update objects in the cache after a mutation. This means that most of the time you don't have to write any caching logic at all. And this explains why GraphQL is so popular: the best code is no code!
import { gql, useQuery } from '@apollo/client';

const { data, loading, error } = useQuery(gql`
  { all { id title } highPrio { id title } }
`);
This code automatically fetches the query on load, normalizes the response, and writes it into the cache. It then denormalizes the cache back into the shape of the query. Updates to elements in the cache automatically update all subscribed components.
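To make that last point concrete, a hedged sketch (the renameTodo mutation is hypothetical): because the result includes __typename and id, Apollo can locate Todo:1 in the normalized cache, merge the new title, and re-render every subscribed query without any manual cache code:
import { gql, useMutation } from '@apollo/client';

const RENAME_TODO = gql`
  mutation RenameTodo($id: ID!, $title: String!) {
    renameTodo(id: $id, title: $title) {
      id    # identifies the cache entry (together with __typename)
      title # the changed field, merged into the cache automatically
    }
  }
`;

function useRenameTodo() {
  const [renameTodo] = useMutation(RENAME_TODO);
  return (id, title) => renameTodo({ variables: { id, title } });
}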

How to get the GraphQL request body in apollo-server [duplicate]

I have written a GraphQL query like the one below:
{
  posts {
    author {
      comments
    }
    comments
  }
}
I want to know how I can get the details of the requested child fields inside the posts resolver.
I want to do this to avoid nested resolver calls. I am using Apollo Server's DataSource API.
I can change the API server to get all the data at once.
I am using Apollo Server 2.0, and any other ways of avoiding nested calls are also welcome.
You'll need to parse the info object that's passed to the resolver as its fourth parameter. This is the type for the object:
type GraphQLResolveInfo = {
  fieldName: string,
  fieldNodes: Array<Field>,
  returnType: GraphQLOutputType,
  parentType: GraphQLCompositeType,
  schema: GraphQLSchema,
  fragments: { [fragmentName: string]: FragmentDefinition },
  rootValue: any,
  operation: OperationDefinition,
  variableValues: { [variableName: string]: any },
}
You could traverse the AST of the field yourself, but you're probably better off using an existing library. I'd recommend graphql-parse-resolve-info. There are a number of other libraries out there, but graphql-parse-resolve-info is a pretty complete solution and is actually used under the hood by PostGraphile. Example usage:
const { parseResolveInfo } = require('graphql-parse-resolve-info')

posts: (parent, args, context, info) => {
  const parsedResolveInfo = parseResolveInfo(info)
  console.log(parsedResolveInfo)
}
This will log an object along these lines:
{
  alias: 'posts',
  name: 'posts',
  args: {},
  fieldsByTypeName: {
    Post: {
      author: {
        alias: 'author',
        name: 'author',
        args: {},
        fieldsByTypeName: ...
      },
      comments: {
        alias: 'comments',
        name: 'comments',
        args: {},
        fieldsByTypeName: ...
      }
    }
  }
}
You can walk through the resulting object and construct your SQL query (or set of API requests, or whatever) accordingly.
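For example, a hedged sketch of acting on that object (context.db.getPosts and its include flags are hypothetical):
posts: async (parent, args, context, info) => {
  const parsed = parseResolveInfo(info)
  const postFields = parsed.fieldsByTypeName.Post
  // Only fetch/join what the client actually asked for
  return context.db.getPosts({
    includeAuthor: 'author' in postFields,
    includeComments: 'comments' in postFields,
  })
}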
Here are a couple of main points you can use to optimize your queries for performance.
In your example it would help greatly to use https://github.com/facebook/dataloader. If you load comments in your resolvers through DataLoader, you ensure that they are fetched just once per batch. This will significantly reduce the number of calls to the database, since your query demonstrates the N+1 problem. A minimal sketch follows after the next point.
I am not sure exactly what information you need to obtain in posts ahead of time, but if you know the post ids you can consider doing a "look-ahead" by passing the already-known ids into comments. This ensures that you do not need to wait for posts: you avoid the GraphQL tree calls and can resolve comments without waiting for posts. This is a great article on optimizing GraphQL waterfall requests and may give you a good idea of how to optimize your queries with DataLoader and look-aheads: https://blog.apollographql.com/optimizing-your-graphql-request-waterfalls-7c3f3360b051
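A minimal sketch of the DataLoader idea for this schema (getCommentsByPostIds is a hypothetical batch query; DataLoader itself is the real library linked above):
const DataLoader = require('dataloader')

// Collects all .load(id) calls made during one tick and issues a
// single batched fetch, instead of one fetch per post.
const commentsByPostLoader = new DataLoader(async (postIds) => {
  const comments = await getCommentsByPostIds(postIds) // hypothetical, e.g. WHERE post_id IN (...)
  // DataLoader requires the results array to line up with the input keys
  return postIds.map((id) => comments.filter((c) => c.postId === id))
})

const resolvers = {
  Post: {
    comments: (post) => commentsByPostLoader.load(post.id),
  },
}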

How could I structure my graphql schema to allow for the retrieval of possible dropdown values?

I'm trying to get the possible values for multiple dropdown menus from my GraphQL API.
For example, say I have a schema like so:
type Employee {
  id: ID!
  name: String!
  jobRole: Lookup!
  address: Address!
}
type Address {
  street: String!
  line2: String
  city: String!
  state: Lookup!
  country: Lookup!
  zip: String!
}
type Lookup {
  id: ID!
  value: String!
}
jobRole, state, and country are all fields with a predetermined list of values that are needed in various dropdowns in forms around the app.
What would be the best practice in the schema design for this case? I'm considering the following option:
query {
  lookups {
    jobRoles {
      id
      value
    }
  }
}
This has the advantage of being data-driven, so I can update my job roles without having to update my schema, but I can see it becoming cumbersome. I've only added a few of our business objects and already have about 25 different types of lookups in my schema. As I add more data to the API, I'll somehow need to maintain the right lookups for the right fields, deal with general lookups used in multiple places vs. ultra-specific lookups that only ever apply to one field, etc.
Has anyone else come across a similar issue and is there a good design pattern to handle this?
And for the record, I don't want to use enums with introspection, for two reasons:
With the number of lookups in our existing data, there would be a need for very frequent schema updates.
With an enum you only get one value; I need a code that will be used as the primary key in the DB and a descriptive value that will be displayed in the UI.
// bad
enum jobRole {
  MANAGER
  ENGINEER
  SALES
}
// needed
[
  { id: 1, value: "Manager" },
  { id: 2, value: "Engineer" },
  { id: 3, value: "Sales" }
]
EDIT
I wanted to give another example of why enums probably aren't going to work. We have a lot of descriptions that should show up in a dropdown and that contain special characters.
// Client Type
[
  { id: 'ENDOW', value: 'Foundation/Endowment' },
  { id: 'PUBLIC', value: 'Public (Government)' },
  { id: 'MULTI', value: 'Union/Multi-Employer' }
]
There are others that are worse; they contain <, >, %, etc., and some are complete sentences, so the restrictive naming of enums really isn't going to work here. I'm leaning towards just making a bunch of lookup queries and treating each lookup as a distinct business object.
I found a way to make enums work the way I needed: I can carry the display value in the enum value's description.
Here's my GraphQL schema definition:
enum ClientType {
  """
  Public (Government)
  """
  PUBLIC
  """
  Union/Multi-Employer
  """
  MULTI
  """
  Foundation/Endowment
  """
  ENDOW
}
When I retrieve it with an introspection query like so:
{
  __type(name: "ClientType") {
    enumValues {
      name
      description
    }
  }
}
I get my data in the exact structure I was looking for!
{
  "data": {
    "__type": {
      "enumValues": [
        { "name": "PUBLIC", "description": "Public (Government)" },
        { "name": "MULTI", "description": "Union/Multi-Employer" },
        { "name": "ENDOW", "description": "Foundation/Endowment" }
      ]
    }
  }
}
Which has exactly what I need: I can use all the special characters, numbers, etc. found in our descriptions. If anyone is wondering how I keep my schema in sync with our database: I have a simple code-generating script that queries the tables storing this info and generates an enums.ts file exporting all these enums. Whenever the data is updated (which doesn't happen often) I just re-run the generator and publish the schema changes to production.
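As a rough illustration, a hedged sketch of such a generator (the table name and db client are assumptions, and it emits SDL directly rather than the enums.ts module the answer describes):
const fs = require('fs')

// Queries the lookup table and renders one SDL enum, putting each
// row's display value into the enum value's description block.
async function generateClientTypeEnum(db) {
  const rows = await db.query('SELECT id, value FROM client_types') // assumed table
  const body = rows
    .map(({ id, value }) => `  """\n  ${value}\n  """\n  ${id}`)
    .join('\n')
  fs.writeFileSync('clientType.graphql', `enum ClientType {\n${body}\n}\n`)
}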
You can still use enums for this if you want.
Introspection queries can be used client-side just like any other query. Depending on what implementation/framework you're using server-side, you may have to explicitly enable introspection in production. Your client can query the possible enum values when your app loads -- regardless of how many times the schema changes, the client will always have the correct enum values to display.
Enum values are not limited to all caps, although they cannot contain spaces. So you can have Engineer but not Human Resources. That said, if you substitute underscores for spaces, you can just transform the value client-side.
I can't speak to non-JavaScript implementations, but GraphQL.js supports assigning a value property for each enum value. This property is only used internally. For example, if you receive the enum as an argument, you'll get 2 instead of Engineer. Likewise, you would return 2 instead of Engineer inside a resolver. You can see how this is done with Apollo Server in its documentation.
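For illustration, a minimal GraphQL.js sketch of that internal value mapping (the JobRole enum and its numeric codes are assumptions taken from the question):
const { GraphQLEnumType } = require('graphql')

const JobRoleType = new GraphQLEnumType({
  name: 'JobRole',
  values: {
    // Clients send and receive the names; resolvers and argument
    // values see the internal numeric codes instead.
    MANAGER: { value: 1 },
    ENGINEER: { value: 2 },
    SALES: { value: 3 },
  },
})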

Resolve to the same object from two incoherent sources in graphql

I have a problem I don't know how to solve properly.
I'm working on a project where we use a GraphQL server to communicate with different APIs. These APIs are old and very difficult to update, so we decided to use GraphQL to simplify our communications.
For now, two APIs allow me to get user data. I know it's not coherent, but sadly I can't change anything about that, and I need to use both of them for different actions. So for the sake of simplicity, I would like to abstract this away from my front app, so it only asks for user data, always in the same format, no matter which API the data comes from.
With only one API, the resolver system of GraphQL helped a lot. But when I access user data from a second API, I find it very difficult to always send back the same object to my front app. The two APIs, even though they hold mostly the same data, have different response formats. So in my resolvers, depending on where the data comes from, I have to do one thing or another.
Example:
API A
type User {
  id: String
  communication: Communication
}
type Communication {
  mail: String
}
API B
type User {
  id: String
  mail: String
}
I've heard a bit about Apollo Federation, but I can't put a GraphQL server in front of every API in our system, so I'm kind of lost as to how I can achieve transparency for my front app when data comes from two different sources.
If anyone has already encountered the same problem or has advice on something I can do, I'm all ears :)
You need to decide what "shape" of the User type makes sense for your client app, regardless of what's being returned by the REST APIs. For this example, let's say we go with:
type User {
  id: String
  mail: String
}
Additionally, for the sake of this example, let's assume we have a getUser field that returns a single user. Any arguments are irrelevant to the scenario, so I'm omitting them here.
type Query {
  getUser: User
}
Assuming I don't know which API to query for the user, our resolver for getUser might look something like this:
async () => {
  const [userFromA, userFromB] = await Promise.all([
    fetchUserFromA(),
    fetchUserFromB(),
  ])
  // transform response
  if (userFromA) {
    const { id, communication: { mail } } = userFromA
    return {
      id,
      mail,
    }
  }
  // response from B is already in the correct "shape", so just return it
  if (userFromB) {
    return userFromB
  }
}
Alternatively, we can utilize individual field resolvers to achieve the same effect. For example:
const resolvers = {
  Query: {
    getUser: async () => {
      const [userFromA, userFromB] = await Promise.all([
        fetchUserFromA(),
        fetchUserFromB(),
      ])
      return userFromA || userFromB
    },
  },
  User: {
    mail: (user) => {
      if (user.communication) {
        return user.communication.mail
      }
      return user.mail
    },
  },
}
Note that you don't have to match your schema to either response from your existing REST endpoints. For example, maybe you'd like to return a User like this:
type User {
  id: String
  details: UserDetails
}
type UserDetails {
  email: String
}
In this case, you'd just transform the response from either API to fit your schema.
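For instance, a hedged sketch of a field resolver producing that nested shape from either API's response:
const resolvers = {
  User: {
    // Build the nested details object regardless of which API responded
    details: (user) => ({
      email: user.communication ? user.communication.mail : user.mail,
    }),
  },
}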

After a mutation, how do I update the affected data across views? [duplicate]

This question already has an answer here:
Auto-update of apollo client cache after mutation not affecting existing queries
(1 answer)
Closed 3 years ago.
I have both the getMovies query and addMovie mutation working. When addMovie happens though, I'm wondering how to best update the list of movies in "Edit Movies" and "My Profile" to reflect the changes. I just need a general/high-level overview, or even just the name of a concept if it's simple, on how to make this happen.
My initial thought was just to hold all of the movies in my Redux store. When the mutation finishes, it should return the newly added movie, which I can concatenate to the movies of my store.
After "Add Movie", it would pop back to the "Edit Movies" screen where you should be able to see the newly added movie, then if you go back to "My Profile", it'd be there too.
Is there a better way to do this than holding it all in my own Redux store? Is there any Apollo magic I don't know about that could possibly handle this update for me?
EDIT: I discovered the idea of updateQueries (http://dev.apollodata.com/react/cache-updates.html#updateQueries). I think this is what I want (please let me know if this is not the right approach). This seems better than the traditional way of using my own Redux store.
// this represents the 3rd screen in my picture
// (`update` here is assumed to be immutability-helper / react-addons-update)
const AddMovieWithData = compose(
  graphql(searchMovies, {
    props: ({ mutate }) => ({
      search: (query) => mutate({ variables: { query } }),
    }),
  }),
  graphql(addMovie, {
    props: ({ mutate }) => ({
      addMovie: (user_id, movieId) => mutate({
        variables: { user_id, movieId },
        updateQueries: {
          getMovies: (prev, { mutationResult }) => {
            // my mutation returns just the newly added movie
            const newMovie = mutationResult.data.addMovie;
            return update(prev, {
              getMovies: {
                $unshift: [newMovie],
              },
            });
          },
        },
      }),
    }),
  })
)(AddMovie);
After the addMovie mutation, this properly updates the view in "My Profile" because it uses the getMovies query (woah)! I'm then passing these movies as props into "Edit Movies", so how do I update it there as well? Should I just have them both use the getMovies query? Is there a way to pull the new result of getMovies out of the store so I can reuse it on "Edit Movies" without doing the query again?
EDIT 2: Wrapping MyProfile and EditMovies both with the getMovies query container seems to work fine. After addMovie, it's updated in both places thanks to updateQueries on getMovies. It's fast too. I think it's being cached?
It all works, so I guess this just becomes a question of: Was this the best approach?
The answer to the question in the title is:
Use updateQueries to "inform" the queries that drive the other views that the data has changed (as you discovered).
This topic gets ongoing discussion in the react-apollo Slack channel, and this answer is the consensus that I'm aware of: there's no obvious alternative.
Note that you can update more than one query (that's why the name is plural, and the argument is an object whose keys match the names of all the queries that need updating).
As you may guess, this "pattern" means you need to design and use queries carefully to keep mutations easy to maintain. More commonly shared queries mean less chance that you miss one in a mutation's updateQueries action.
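A hedged sketch of one mutation updating two queries (getProfileMovies and its result shape are hypothetical):
mutate({
  variables: { user_id, movieId },
  updateQueries: {
    // each key names a query whose cached result should be patched
    getMovies: (prev, { mutationResult }) =>
      update(prev, { getMovies: { $unshift: [mutationResult.data.addMovie] } }),
    getProfileMovies: (prev, { mutationResult }) =>
      update(prev, { movies: { $unshift: [mutationResult.data.addMovie] } }),
  },
})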
Apollo Client only updates the store automatically for mutations that update existing entities. So when you use create or delete mutations, you need to tell Apollo Client how to update the cache. I had expected the store to update automatically, but it doesn't...
I have found a workaround using resetStore just after the mutation: you reset the store right after the mutation completes, so the next time a query runs the store is empty and Apollo refetches fresh data.
Here is the code:
import { withApollo } from 'react-apollo'
...
deleteCar = async id => {
  await this.props.deleteCar({
    variables: {
      where: {
        id: id
      }
    },
  })
  this.props.client.resetStore().then(data => {
    this.props.history.push('/cars')
  })
}
...
...
export default compose(
  graphql(POST_QUERY, {
    name: 'carQuery',
    options: props => ({
      fetchPolicy: 'network-only',
      variables: {
        where: {
          id: props.match.params.id,
        }
      },
    }),
  }),
  graphql(DELETE_MUTATION, {
    name: 'deleteCar',
  }),
  withRouter,
  withApollo
)(DetailPage)
The full code is here: https://github.com/alan345/naperg
[Screenshot: the error shown before the resetStore workaround]
