Which is the correct resolver function approach? - graphql

I would like to clarify which approach I should use for my resolver functions in Apollo + GraphQL.
Let's assume the following schema:
type Post {
  id: Int
  text: String
  upVotes: Int
}

type Author {
  name: String
  posts: [Post]
}

schema {
  query: Author
}
The Apollo GraphQL tutorial suggests a resolver map like this:
{ Query: {
    author(_, args) {
      return author.findAll()
    }
  }
},
Author {
  posts: (author) => author.getPosts(),
}
As far as I know, all logic regarding posts, e.g. getting an author with posts whose upVotes count is greater than args.upVotes, must be handled in the author method. That gives us the following resolver map:
{ Query: {
    author(_, args) {
      return author.findAll({
        include: [model: Post],
        where: { /* post upVotes > args.upVotes */ }
      })
    }
  },
Author {
  posts: (author) => author.getPosts(),
}
Calling author will first select the author with posts in one joined query, where post upVotes are greater than args.upVotes. Then it will select the posts for that author again, in an additional query, because of the Author ... getPosts() resolver.
Technically, I can reach the same result by removing Author, since posts are already included in the author method.
I have the following questions:
Do I need this statement? In which cases?
Author {
  posts: (author) => author.getPosts(),
}
If no, then how can I find out whether the posts field was requested, so that I can include posts conditionally, depending not only on the arguments but also on the requested fields?
If yes, which posts will the final result contain? Posts from the include statement, or from getPosts()?

The resolver map you included in your question isn't valid. I'm going to assume you meant something like this for the Author type:
Author: {
  posts: (author) => author.getPosts(),
}
As long as your author query always resolves to an array of objects that include a posts property, then you're right in thinking it doesn't make sense to include a custom resolver for the posts field on the Author type. In this case, your query's resolver is already populating all the necessary fields, and we don't have to do anything else.
GraphQL utilizes a default resolver that looks for properties on the parent (or root) object passed down to the resolver and uses them if they match the name of the field being resolved. So if GraphQL is resolving the posts field and there is no resolver for posts, by default it looks at the Author object it's dealing with, and if that object has a property named posts, it resolves the field to its value.
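For reference, the default resolver behaves roughly like this (a simplified sketch of graphql-js's defaultFieldResolver, not its exact source):

// Simplified sketch of the default field resolver used when a field
// has no custom resolver of its own
function defaultFieldResolver(source, args, context, info) {
  // Look for a property on the parent object with the same name as the field
  const property = source[info.fieldName]
  // If the property is a function, call it with the field's arguments;
  // otherwise return the value as-is
  return typeof property === 'function' ? property(args, context, info) : property
}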
When we provide a custom resolver, that resolver overrides the default behavior. So if your resolver was, for example:
posts: () => []
then GraphQL would always return an empty set of posts, even if the objects returned by author.findAll() included posts.
So when would you need to include the resolver for posts?
If your author resolver didn't "include" the posts, but the client requested that field. Like you said, the problem is that we're potentially making an unnecessary additional call in some cases, depending on whether your author resolver "includes" the posts or not. You can get around that by doing something like this:
posts: (author) => {
  if (author.posts) return author.posts
  return author.getPosts()
}

// or more succinctly
posts: (author) => author.posts ? author.posts : author.getPosts()
This way, we only call getPosts if we actually need to get the posts. Alternatively, you can omit the posts resolver and handle this inside your author resolver. We can look at the fourth argument passed to the resolver for information about the request, including which fields were requested. For example, your resolver could look something like this:
author: (root, args, context, info) => {
  const include = []
  // Check whether the client requested the posts field on Author
  const requestedPosts = info.fieldNodes[0].selectionSet.selections
    .some(s => s.name.value === 'posts')
  if (requestedPosts) include.push(Post)
  return Author.findAll({ include })
}
Now your resolver will only include the posts for each author if the client specifically requested them. The AST object provided to the resolver is messy to parse by hand, but there are libraries out there (like this one) to help with that.
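For instance, with the graphql-fields package (the library mentioned in the last answer further down this page), the manual AST check could be reduced to something like this sketch, where Post and Author are the Sequelize models from the question:

const graphqlFields = require('graphql-fields')

const resolvers = {
  Query: {
    author: (root, args, context, info) => {
      const include = []
      // graphqlFields(info) returns a plain object keyed by the requested
      // fields, e.g. { name: {}, posts: { id: {}, text: {} } }
      if (graphqlFields(info).posts) include.push(Post)
      return Author.findAll({ include })
    },
  },
}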

Related

Is it possible to add a layer of grouping under query in graphql? [duplicate]

All docs and tutorials usually show simple examples of mutations that look like this:
extend type Mutation {
  edit(postId: String): String
}
But this way the edit method has to be unique across all entities, which doesn't seem like a very robust way to write things. I would like to describe mutations similar to how we describe queries, something like this:
type PostMutation {
  edit(postId: String): String
}

extend type Mutation {
  post: PostMutation
}
This seems to be a valid schema (it compiles and I can see it reflected in the generated GraphiQL docs). But I can't find a way to make resolvers work with this schema.
Is this a supported case for GraphQL?
It's possible but generally not a good idea because:
It breaks the spec. From section 6.3.1:
Because the resolution of fields other than top‐level mutation fields must always be side effect‐free and idempotent, the execution order must not affect the result, and hence the server has the freedom to execute the field entries in whatever order it deems optimal.
In other words, only fields on the mutation root type should have side effects like CRUD operations.
Having the mutations at the root makes sense conceptually. Whatever action you're doing (liking a post, verifying an email, submitting an order, etc.) doesn't rely on GraphQL having to resolve additional fields before the action is taken. This is unlike when you're actually querying data. For example, to get comments on a post, we may have to resolve a user field, then a posts field and then finally the comments field for each post. At each "level", the field's contents are dependent on the value the parent field resolved to. This normally is not the case with mutations.
Under the hood, mutations are resolved sequentially. This is contrary to normal field resolution which happens in parallel. That means, for example, the firstName and lastName of a User type are resolved at the same time. However, if your operation type is mutation, the root fields will all be resolved one at a time. So in a query like this:
mutation SomeOperationName {
  createUser
  editUser
  deleteUser
}
Each mutation will happen one at a time, in the order that they appear in the document. However, this only works for the root and only when the operation is a mutation, so these three fields will resolve in parallel:
mutation SomeOperationName {
  user {
    create
    edit
    delete
  }
}
If you still want to do it, despite the above, this is how you do it when using makeExecutableSchema, which is what Apollo uses under the hood:
const resolvers = {
  Mutation: {
    post: () => ({}), // return an empty object
  },
  PostMutation: {
    edit: () => editPost(),
  },
  // Other types here
}
Your schema defined PostMutation as an object type, so GraphQL is expecting that field to return an object. If you omit the resolver for post, it will return null, which means none of the resolvers for the returned type (PostMutation) will be fired. That also means we can write:
mutation {
  post
}
which does nothing but is still a valid query, which is yet another reason to avoid this sort of schema structure.
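If you do go this route anyway, a minimal, self-contained sketch of the wiring could look like this (assuming graphql-tools' makeExecutableSchema and a hypothetical editPost helper standing in for your actual update logic):

const { makeExecutableSchema } = require('graphql-tools')

// Hypothetical helper standing in for your actual update logic
const editPost = (postId) => `edited ${postId}`

const typeDefs = `
  type Query {
    ping: String
  }
  type PostMutation {
    edit(postId: String): String
  }
  type Mutation {
    post: PostMutation
  }
`

const resolvers = {
  Query: {
    ping: () => 'pong',
  },
  Mutation: {
    // Must return a non-null value so PostMutation's field resolvers run
    post: () => ({}),
  },
  PostMutation: {
    edit: (_, { postId }) => editPost(postId),
  },
}

const schema = makeExecutableSchema({ typeDefs, resolvers })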
Absolutely disagree with Daniel!
This is a great approach that helps frontend developers quickly understand which operations a given resource/model has, instead of scanning one long list of mutations.
Calling multiple mutations in one request is a common antipattern; for such cases it's better to create one complex mutation.
But even if you need to perform such an operation with several mutations, you can use aliases:
await graphql({
  schema,
  source: `
    mutation {
      op1: article { like(id: 1) }
      op2: article { like(id: 2) }
      op3: article { unlike(id: 3) }
      op4: article { like(id: 4) }
    }
  `,
});
expect(serialResults).toEqual([
  'like 1 executed with timeout 100ms',
  'like 2 executed with timeout 100ms',
  'unlike 3 executed with timeout 5ms',
  'like 4 executed with timeout 100ms',
]);
See the following test case: https://github.com/nodkz/conf-talks/blob/master/articles/graphql/schema-design/tests/mutations-test.js
The like/unlike methods are async with timeouts and run sequentially.
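A rough sketch of the resolver map that example assumes (the ArticleMutation type name and the timeout values are illustrative, mirroring the linked test rather than copying it):

const resolvers = {
  Mutation: {
    // Each aliased `article` root field resolves to an empty object;
    // because they are root mutation fields, they execute one at a time
    article: () => ({}),
  },
  ArticleMutation: {
    like: async (_, { id }) => {
      await new Promise(resolve => setTimeout(resolve, 100))
      return `like ${id} executed with timeout 100ms`
    },
    unlike: async (_, { id }) => {
      await new Promise(resolve => setTimeout(resolve, 5))
      return `unlike ${id} executed with timeout 5ms`
    },
  },
}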

Can a GraphQL resolver force arguments in parent to be retrieved?

If I have 2 types: User and Note with the following schema:
type Query {
  getUser(userId: ID!): User
}

type User {
  userId: ID
  email: String
  notes: [Note]
}

type Note {
  noteId: ID
  text: String
}
I am writing a resolver for User#notes. Now say notes need to be retrieved by email address, so I actually need the root object passed to the resolver to contain the email field. Is there any way I can force GraphQL to query the email field on the User object even if the client has not requested it?
In terms of code, from what I see, this is how I can write a resolver. How can I ensure obj.email is present whenever the client requests the notes field?
User: {
  notes(obj, args, context, info) {
    // How can I ensure obj.email is requested?
    return NoteRetriever.getNotesByEmail(obj.email);
  }
}
Edit
I am wondering about the case where the parent resolver doesn't resolve the email field unless it is explicitly requested. What if we need to make an API call to get the email for the user? By default we don't request it, but when notes is requested, it makes sense to request the email too.
Is there a way for the resolver to specify a dependency on parent fields, to ensure they get requested?
The "parent" value passed to your resolver as the first parameter is exactly what was returned in the parent field's resolver (unless a Promise was returned, in which case it will be whatever the Promise resolved to). So if we have a resolver like this:
Query: {
  getUser: () => {
    return {
      userId: 10,
      email: 'user@example.com',
      foobar: 42,
    }
  }
}
and a query like:
query {
  getUser(userId: "10") {
    userId
    notes {
      noteId
    }
  }
}
What's passed to our notes resolver is the entire object we returned inside the resolver for getUser.
User: {
  notes(obj, args, context, info) {
    console.log(obj.userId) // 10
    console.log(obj.email)  // "user@example.com"
    console.log(obj.foobar) // 42
  }
}
The parent value will be the same, regardless of the fields requested, unless the parent field resolver's logic actually returns a different value depending on the requested fields. This means you can also pass down any number of other, arbitrary entries (like foobar above) from the parent to each child field.
EDIT:
Fields are resolved independently of one another, so there is no mechanism for declaring dependencies between fields. If the getUser resolver is looking at the requested fields and making certain API calls based on requested fields (and omitting others if those fields were not requested), then you'll need to modify that logic to account for the notes field needing the user email.
I think the expectation is that if you control the query of the parent, and expect the value in the child, you should ensure the required value is always resolved by the parent.
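For example, the getUser resolver could treat a request for notes as an implicit request for email. A sketch, where fetchUser is a hypothetical data-access helper that takes the list of fields to retrieve, and the AST check ignores fragments for brevity:

const resolvers = {
  Query: {
    getUser: (root, { userId }, context, info) => {
      // Top-level fields requested on User (fragments are skipped for brevity)
      const requested = info.fieldNodes[0].selectionSet.selections
        .filter(s => s.kind === 'Field')
        .map(s => s.name.value)
      const fields = ['userId']
      // The notes resolver depends on the parent's email, so fetch it
      // whenever either email or notes was requested
      if (requested.includes('email') || requested.includes('notes')) {
        fields.push('email')
      }
      // fetchUser is a hypothetical helper that only retrieves the given fields
      return fetchUser(userId, fields)
    },
  },
}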
There is, however, a way to do what you are asking when merging schemas. This is described here https://www.apollographql.com/docs/graphql-tools/schema-stitching.
Basically you would need to have a base schema that is something like:
type Query {
  getUser(userId: ID!): User
}

type User {
  userId: ID
  email: String
}
With the same resolvers as you have now, and a second schema that is something like
type Note {
  noteId: ID
  text: String
}

extend type User {
  notes: [Note]
}
Along with something like
import { mergeSchemas } from 'graphql-tools';

const finalSchema = mergeSchemas({
  schemas: [userSchema, noteSchema],
  resolvers: {
    User: {
      notes: {
        fragment: '... on User { email }',
        resolve: notesResolver
      }
    }
  }
});

console.dir(await graphql(finalSchema, `query { ... }`));
Notice the fragment property defining the email field. The linked page above describes how this forces resolution of the given field on the parent when this child is resolved.

graphql resolver optimisation

If I have the schema:
type Query {
  posts: [Post!]!
}

type Post {
  title: String!
  lotsofdata: String
}
and a resolver:
function posts(parent, args, context, info) {
  return readAllPosts(/* ? */)
}
And two possible queries. Query #1:
query {
  posts {
    title
  }
}
and query #2:
query {
  posts {
    title
    lotsofdata
  }
}
Is it possible to optimise the resolver so that with query #1 readAllPosts only pulls back titles from the database, but for query #2 it pulls back both titles and lotsofdata?
I've looked at the parent, args, context, and info arguments but can't see anything to indicate whether the resolver is being called in response to a query like #1 or like #2.
Not sure if it is still relevant for you, but it should be possible. You can take a look at the graphql-fields library: https://github.com/robrichard/graphql-fields#readme. It will parse the info argument in your resolver function. This way you can get information about the requested fields in your resolver. The other part is to use this information to build a proper SQL statement or projection or whatever (depending on what DB you use). I hope that helps.
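A minimal sketch of what that could look like here, assuming graphql-fields is installed and that readAllPosts can accept the list of columns to select:

const graphqlFields = require('graphql-fields')

function posts(parent, args, context, info) {
  // graphql-fields turns the AST in `info` into a plain object whose keys
  // are the requested fields, e.g. { title: {} } or { title: {}, lotsofdata: {} }
  const requestedColumns = Object.keys(graphqlFields(info))
  // readAllPosts is assumed to select only the given columns from the database
  return readAllPosts(requestedColumns)
}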
Best David
