graphql-tools difference between mergeSchemas and makeExecutableSchema - graphql

The reason I am asking this question is that I can get both of these to return a working result just by swapping one for the other. So which is the right one to use, and why?
What are their purposes in regards to schemas?
import { mergeSchemas } from 'graphql-tools'
import bookSchema from './book/schema/book.gql'
import bookResolver from './book/resolvers/book'

export const schema = mergeSchemas({
  schemas: [bookSchema],
  resolvers: [bookResolver]
})
import { makeExecutableSchema } from 'graphql-tools'
import bookSchema from './book/schema/book.gql'
import bookResolver from './book/resolvers/book'

export const schema = makeExecutableSchema({
  typeDefs: [bookSchema],
  resolvers: [bookResolver]
})
Both of these examples work and return the desired outcome. I believe the correct one to use here is makeExecutableSchema, but I'm not sure why the first one works.
EDIT
Just in case it helps, here are the types/resolvers:
typeDefs
type Query {
  book(id: String!): Book
  bookList: [Book]
}

type Book {
  id: String
  name: String
  genre: String
}
Resolvers
export default {
  Query: {
    book: () => {
      return {
        id: `1`,
        name: `name`,
        genre: `scary`
      }
    },
    bookList: () => {
      return [
        { id: `1`, name: `name`, genre: `scary` },
        { id: `2`, name: `name`, genre: `scary` }
      ]
    }
  }
}
Query Ran
query {
  bookList {
    id
    name
    genre
  }
}
Result
{
  "data": {
    "bookList": [
      {
        "id": "1",
        "name": "name",
        "genre": "scary"
      },
      {
        "id": "2",
        "name": "name",
        "genre": "scary"
      }
    ]
  }
}

mergeSchemas is primarily intended to be used for schema stitching, not combining code for a single schema you've chosen to split up for organizational purposes.
Schema stitching is most commonly done when you have multiple microservices that each expose a GraphQL endpoint. You can extract schemas from each endpoint and then use mergeSchemas to create a single GraphQL service that delegates queries to each microservice as appropriate. Technically, schema stitching could also be used to extend some existing API or to create multiple services from a base schema, although I imagine those use cases are less common.
If you are architecting a single, contained GraphQL service, you should stick with makeExecutableSchema. makeExecutableSchema is what actually lets you use Schema Definition Language to generate your schema. mergeSchemas is a relatively new API and has a number of open issues, especially with regard to how directives are handled. If you don't need the functionality provided by mergeSchemas -- namely, merging separate schemas -- don't use it.
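For contrast, here is a minimal sketch of what mergeSchemas is actually for, using the graphql-tools v4-era API from the question; the two subschemas are hypothetical stand-ins for separate services:

import { makeExecutableSchema, mergeSchemas } from 'graphql-tools'

// Two independently executable schemas, standing in for two services.
const bookSchema = makeExecutableSchema({
  typeDefs: `type Query { book(id: String!): String }`,
  resolvers: { Query: { book: (_, { id }) => `book ${id}` } }
})

const authorSchema = makeExecutableSchema({
  typeDefs: `type Query { author(id: String!): String }`,
  resolvers: { Query: { author: (_, { id }) => `author ${id}` } }
})

// mergeSchemas builds one proxy schema whose root fields delegate to the
// appropriate subschema at execution time.
export const schema = mergeSchemas({ schemas: [bookSchema, authorSchema] })

Incidentally, this is presumably why the question's first snippet also works: mergeSchemas accepts SDL strings plus resolvers and builds an executable schema from them, so with a single schema both calls end up producing the same result.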

Yes, makeExecutableSchema creates a GraphQL.js GraphQLSchema instance from GraphQL schema language, as per the graphql-tools docs. So if you are creating a standalone, contained GraphQL service, it is the way to go.
But if you are looking to consolidate multiple GraphQL services, there are several strategies you may consider, such as schema stitching or schema merging from graphql-tools, or federation from Apollo (there are probably more).
Since I landed here while searching for the difference between stitching and merging, I want to point out that they are not one and the same. Here is the answer I got when I asked this question on the graphql-tools GitHub:

Schema Stitching creates a proxy schema on top of different independent subschemas, so the parts of that schema are executed using GraphQL.js internally. This is useful for creating an architecture like microservices.
Schema Merging creates a new schema by merging the extracted type definitions and resolvers from them, so there will be a single execution layer.
The first keeps the individual schemas intact; the second does not. A use case for the first would be combining multiple remote GraphQL APIs (microservices), while the second is good for combining local schemas.
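To make the distinction concrete, here is a small sketch using the newer scoped graphql-tools packages (@graphql-tools/stitch, @graphql-tools/merge, @graphql-tools/schema); the type definitions are placeholders:

import { makeExecutableSchema } from '@graphql-tools/schema'
import { stitchSchemas } from '@graphql-tools/stitch'
import { mergeTypeDefs, mergeResolvers } from '@graphql-tools/merge'

const bookTypeDefs = `type Query { book: String }`
const bookResolvers = { Query: { book: () => 'a book' } }
const authorTypeDefs = `type Query { author: String }`
const authorResolvers = { Query: { author: () => 'an author' } }

// Stitching: each part remains an independent executable schema, and the
// combined schema delegates into the subschemas at runtime.
const stitched = stitchSchemas({
  subschemas: [
    { schema: makeExecutableSchema({ typeDefs: bookTypeDefs, resolvers: bookResolvers }) },
    { schema: makeExecutableSchema({ typeDefs: authorTypeDefs, resolvers: authorResolvers }) }
  ]
})

// Merging: the SDL and resolver maps are combined up front, producing a
// single schema with one execution layer and no delegation.
const merged = makeExecutableSchema({
  typeDefs: mergeTypeDefs([bookTypeDefs, authorTypeDefs]),
  resolvers: mergeResolvers([bookResolvers, authorResolvers])
})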

Related

Typed document node with vue3 apollo and complex where

I use this package https://github.com/dotansimha/graphql-typed-document-node and I usually call it like this: useQuery(peopleDocument, variables).
But Laravel Lighthouse has a complex-where plugin which automatically adds types for the where conditions of various queries, for example:
{ people(where: { column: AGE, operator: EQ, value: 42 }) { name } }
I would like to allow users to build their own filters with their own operators, but how can I define such a query when the filters and their operators are dynamic?
@xadm's vague explanation somehow helped, so I'll answer my own question.
I'm unsure if this is how it should be used, but it does work.
Lighthouse generates the operator and column enums automatically with the @whereConditions directive, so you have to run yarn graphql-codegen again to fetch them.
Then simply import and use them in the component.
Lighthouse schema definition example:
type Query {
  contact(where: _ @whereConditions(columns: ["id", "email", "mobile"])): Contact @first
}
Query definition:
query contact($where: QueryContactWhereWhereConditions) {
  contact(where: $where) {
    id
    email
    mobile
  }
}
Vue component:
import { ContactDocument, QueryContactWhereColumn, SqlOperator } from 'src/typed-document-nodes.ts';

useQuery(ContactDocument, {
  where: { column: QueryContactWhereColumn.Email, operator: SqlOperator.Like, value: '%@example.com' }
})

Apollo Client 3: How to implement caching on client side for graphql interfaces?

I have a case where I have an interface which has different type implementations defined in GraphQL. I may not be able to share the exact code, but the case looks something like:
interface Character {
  name: String!
}

type Human implements Character {
  name: String!
  friends: [Character]
}

type Droid implements Character {
  name: String!
  material: String
}
There is a query which returns either the Human or the Droid type in the response.
The response may contain something like:
{
  name: 'Human_01',
  friends: [],
  __typename: 'Human'
}
or
{
  name: 'Droid_01',
  material: 'Aluminium',
  __typename: 'Droid'
}
I am using Apollo Client 3 on client side for querying the data and have fragments for these like:
fragment Human on Human {
  friends {
    name
  }
}

fragment Droid on Droid {
  material
}

fragment Character on Character {
  name
  ...Human
  ...Droid
}
I am querying for the Character data as:
character {
  ...Character
}
Since this is the case of an interface, and as described in the Apollo Client 3 docs, we need to use possibleTypes in order to match the fragments in such cases. For caching purposes, I have defined the InMemoryCache as:
new InMemoryCache({ possibleTypes: { Character: ['Human', 'Droid'] } })
The primary key field for a Character implementation is the name field, which I need to use to store its value in the cache.
The Apollo Client 3 docs mention using typePolicies to define keyFields for a type.
So, I need to ask whether I should define a type policy for both type implementations, specifying keyFields as name in both cases, like:
new InMemoryCache({
  possibleTypes: { Character: ['Human', 'Droid'] },
  typePolicies: {
    Human: { keyFields: ['name'] },
    Droid: { keyFields: ['name'] }
  }
});
In my example I have provided only two such type implementations, but there can be n type implementations of the Character interface. In that case, I would need to define keyFields as name in the typePolicies for all n implementations.
So, does there exist a better way of implementing caching for these kinds of interface implementations?
Any help would really be appreciated. Thanks!!!
Inheritance of type and field policies is coming in the next minor version of @apollo/client, v3.3!
You can try it out now by installing @apollo/client@3.3.0-beta.5.
To stay up to date on the progress of the v3.3 release, see this pull request.
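Once v3.3 lands, the n-implementations concern above should collapse to a single policy on the interface. A minimal sketch, assuming the Character/Human/Droid types from the question:

import { InMemoryCache } from '@apollo/client'

const cache = new InMemoryCache({
  possibleTypes: { Character: ['Human', 'Droid'] },
  typePolicies: {
    // With policy inheritance, every type listed under possibleTypes
    // inherits this keyFields setting from the interface.
    Character: { keyFields: ['name'] }
  }
})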

how to get the Graphql request body in apollo-server [duplicate]

I have written a GraphQL query like the one below:
{
  posts {
    author {
      comments
    }
    comments
  }
}
I want to know how I can get the details about the requested child fields inside the posts resolver.
I want to do this to avoid nested calls of resolvers. I am using Apollo Server's DataSource API.
I can change the API server to get all the data at once.
I am using ApolloServer 2.0, and any other ways of avoiding nested calls are also welcome.
You'll need to parse the info object that's passed to the resolver as its fourth parameter. This is the type for the object:
type GraphQLResolveInfo = {
  fieldName: string,
  fieldNodes: Array<Field>,
  returnType: GraphQLOutputType,
  parentType: GraphQLCompositeType,
  schema: GraphQLSchema,
  fragments: { [fragmentName: string]: FragmentDefinition },
  rootValue: any,
  operation: OperationDefinition,
  variableValues: { [variableName: string]: any },
}
You could traverse the AST of the field yourself, but you're probably better off using an existing library. I'd recommend graphql-parse-resolve-info. There are a number of other libraries out there, but graphql-parse-resolve-info is a pretty complete solution and is actually used under the hood by postgraphile. Example usage:
import { parseResolveInfo } from 'graphql-parse-resolve-info'

posts: (parent, args, context, info) => {
  const parsedResolveInfo = parseResolveInfo(info)
  console.log(parsedResolveInfo)
}
This will log an object along these lines:
{
  alias: 'posts',
  name: 'posts',
  args: {},
  fieldsByTypeName: {
    Post: {
      author: {
        alias: 'author',
        name: 'author',
        args: {},
        fieldsByTypeName: ...
      },
      comments: {
        alias: 'comments',
        name: 'comments',
        args: {},
        fieldsByTypeName: ...
      }
    }
  }
}
You can walk through the resulting object and construct your SQL query (or set of API requests, or whatever) accordingly.
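For example, here is a hypothetical helper (requestedFields is not part of the library) that flattens the parsed tree into the field names requested per type:

import { parseResolveInfo } from 'graphql-parse-resolve-info'

// Map each type name to the list of fields the client actually asked
// for, based on the parsed object shown above.
function requestedFields(parsedField) {
  const byType = {}
  for (const [typeName, fields] of Object.entries(parsedField.fieldsByTypeName)) {
    byType[typeName] = Object.keys(fields)
  }
  return byType
}

// Inside the posts resolver:
// requestedFields(parseResolveInfo(info)) // => { Post: ['author', 'comments'] }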
Here are a couple of main points that you can use to optimize your queries for performance.

1. In your example, https://github.com/facebook/dataloader would be of great help. If you load comments in your resolvers through a DataLoader, you ensure that they are fetched in a single batch. This will reduce the number of calls to the database significantly, as your query demonstrates the N+1 problem. (See the sketch after this list.)
2. I am not sure exactly what information you need to obtain in posts ahead of time, but if you know the post ids, you can consider doing a "look-ahead" by passing the already-known ids into comments. This ensures that you do not need to wait for posts; you avoid the GraphQL tree calls and can resolve comments without waiting for posts. This is a great article on optimizing GraphQL waterfall requests, and it might give you a good idea of how to optimize your queries with DataLoader and look-aheads: https://blog.apollographql.com/optimizing-your-graphql-request-waterfalls-7c3f3360b051
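A minimal DataLoader sketch for the posts/comments case; getCommentsByPostIds is a hypothetical batch fetch against your data source:

import DataLoader from 'dataloader'

// One batched fetch per tick instead of one query per post (the N+1 fix).
const commentsByPost = new DataLoader(async (postIds) => {
  const rows = await getCommentsByPostIds(postIds) // hypothetical batch query
  // DataLoader expects one result per key, in the same order as the keys.
  return postIds.map((id) => rows.filter((row) => row.postId === id))
})

const resolvers = {
  Post: {
    comments: (post) => commentsByPost.load(post.id)
  }
}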

How could I structure my graphql schema to allow for the retrieval of possible dropdown values?

I'm trying to get the possible values for multiple dropdown menus from my GraphQL API.
For example, say I have a schema like so:
type Employee {
  id: ID!
  name: String!
  jobRole: Lookup!
  address: Address!
}

type Address {
  street: String!
  line2: String
  city: String!
  state: Lookup!
  country: Lookup!
  zip: String!
}

type Lookup {
  id: ID!
  value: String!
}
jobRole, city and state are all fields that have a predetermined list of values that are needed in various dropdowns in forms around the app.
What would be the best practice in the schema design for this case? I'm considering the following option:
query {
  lookups {
    jobRoles {
      id
      value
    }
  }
}
This has the advantage of being data-driven, so I can update my job roles without having to update my schema, but I can see it becoming cumbersome. I've only added a few of our business objects and already have about 25 different types of lookups in my schema. As I add more data into the API, I'll need to somehow maintain the right lookups being used for the right fields, deal with general lookups that are used in multiple places vs. ultra-specific lookups that only ever apply to one field, etc.
Has anyone else come across a similar issue and is there a good design pattern to handle this?
And for the record, I don't want to use enums with introspection, for two reasons:
1. With the number of lookups we have in our existing data, there would be a need for very frequent schema updates.
2. With an enum you only get one value; I need a code that will be used as the primary key in the DB and a descriptive value that will be displayed in the UI.
// bad
enum jobRole {
  MANAGER
  ENGINEER
  SALES
}

// needed
[
  {
    id: 1,
    value: "Manager"
  },
  {
    id: 2,
    value: "Engineer"
  },
  {
    id: 3,
    value: "Sales"
  }
]
EDIT
I wanted to give another example of why enums probably aren't going to work. We have a lot of descriptions that should show up in a dropdown and that contain special characters:
// Client Type
[
  {
    id: 'ENDOW',
    value: 'Foundation/Endowment'
  },
  {
    id: 'PUBLIC',
    value: 'Public (Government)'
  },
  {
    id: 'MULTI',
    value: 'Union/Multi-Employer'
  }
]
There are others that are worse: they have <, >, %, etc., and some of them are complete sentences, so the restrictive naming of enums really isn't going to work for this case. I'm leaning towards just making a bunch of lookup queries and treating each lookup as a distinct business object, as sketched below.
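A rough sketch of that direction (the Lookups type, the db helper, and the table names here are all hypothetical):

// Schema side:
// type Query { lookups: Lookups }
// type Lookups { jobRoles: [Lookup] clientTypes: [Lookup] }

import db from './db' // hypothetical query builder

export default {
  Query: {
    lookups: () => ({}) // empty container; the child resolvers do the work
  },
  Lookups: {
    jobRoles: () => db.select('id', 'value').from('job_roles'),
    clientTypes: () => db.select('id', 'value').from('client_types')
  }
}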
I found a way to make enums work the way I needed: I can get the display value by putting it in the description.
Here's my GraphQL schema definition:
enum ClientType {
  """
  Public (Government)
  """
  PUBLIC

  """
  Union/Multi-Employer
  """
  MULTI

  """
  Foundation/Endowment
  """
  ENDOW
}
When I retrieve it with an introspection query like so
{
  __type(name: "ClientType") {
    enumValues {
      name
      description
    }
  }
}
I get my data in the exact structure I was looking for!
{
  "data": {
    "__type": {
      "enumValues": [
        {
          "name": "PUBLIC",
          "description": "Public (Government)"
        },
        {
          "name": "MULTI",
          "description": "Union/Multi-Employer"
        },
        {
          "name": "ENDOW",
          "description": "Foundation/Endowment"
        }
      ]
    }
  }
}
Which has exactly what I need. I can use all the special characters, numbers, etc. found in our descriptions. If anyone is wondering how I keep my schema in sync with our database: I have a simple code-generating script that queries the tables that store this info and generates an enums.ts file that exports all these enums. Whenever the data is updated (which doesn't happen that often), I just re-run the code generator and publish the schema changes to production.
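For the curious, here is a hedged sketch of what such a generator might look like, assuming lookup rows shaped like the Client Type example above:

// Build an SDL enum whose descriptions carry the display values.
function enumFromRows(name, rows) {
  const values = rows
    .map(({ id, value }) => `  """\n  ${value}\n  """\n  ${id}`)
    .join('\n')
  return `enum ${name} {\n${values}\n}`
}

// enumFromRows('ClientType', [{ id: 'PUBLIC', value: 'Public (Government)' }])
// yields the ClientType enum shown earlier, one documented value per row.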
You can still use enums for this if you want.
Introspection queries can be used client-side just like any other query. Depending on what implementation/framework you're using server-side, you may have to explicitly enable introspection in production. Your client can query the possible enum values when your app loads -- regardless of how many times the schema changes, the client will always have the correct enum values to display.
Enum values are not limited to all caps, although they cannot contain spaces. So you can have Engineer but not Human Resources. That said, if you substitute underscores for spaces, you can just transform the value client-side.
I can't speak to non-JavaScript implementations, but GraphQL.js supports assigning a value property for each enum value. This property is only used internally. For example, if you receive the enum as an argument, you'll get 2 instead of Engineer. Likewise, you would return 2 instead of Engineer inside a resolver. You can see how this is done with Apollo Server here.
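A small sketch of that internal-value mapping with graphql-js (the numeric codes are made up):

import { GraphQLEnumType } from 'graphql'

const JobRole = new GraphQLEnumType({
  name: 'JobRole',
  values: {
    MANAGER: { value: 1 },  // clients send/receive "MANAGER"
    ENGINEER: { value: 2 }, // your resolver receives/returns 2
    SALES: { value: 3 }
  }
})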

can someone explain this code to me

Good day! I'm a newbie here and I'm tackling GraphQL, and I'm having some problems with mutations. Can someone explain this block of code for me? Thank you.
RootMutation: {
  createAuthor: (root, args) => {
    return Author.create(args);
  },
  createPost: (root, { authorId, tags, title, text }) => {
    return Author.findOne({ where: { id: authorId } }).then((author) => {
      console.log('found', author);
      return author.createPost({ tags: tags.join(','), title, text });
    });
  },
},
Sure, this is an example of two mutations in a GraphQL server. We can break it down to understand what is going on.
First, let's look at the type system. A GraphQL schema normally has two root operation types, query and mutation (and sometimes subscription). These root types are the root of your data hierarchy and expose the queries (GET-like requests) and mutations (POST-, PUT-, DELETE-like requests) that you have access to.
By the looks of it you are implementing a schema with a root mutation type that looks like this:
type Mutation {
  createAuthor: Author
  createPost: Post
}
A type in GraphQL is made up of a set of fields, each of which can have an associated resolver. Resolvers in GraphQL are like the event handlers you would attach to endpoints in REST.
The code that you have above defines two resolvers that will handle the logic associated with the createAuthor and createPost mutations, i.e. the code in the createPost resolver is what will be run when I issue a query like this:
mutation CreatePost($post: CreatePostInput!) {
  createPost(input: $post) {
    id
    title
    tags
    text
  }
}
The GraphQL runtime parses the query and routes the operation to the correct resolver depending on the content of the query. In this example, it would see that I am calling the createPost mutation and would make sure to call the createPost resolver which in your case looks like this:
createPost: (root, { authorId, tags, title, text }) => {
  return Author.findOne({ where: { id: authorId } }).then((author) => {
    console.log('found', author);
    return author.createPost({ tags: tags.join(','), title, text });
  });
},
To understand how a resolver works, let's look at the GraphQLFieldResolver type definition from graphql-js:
export type GraphQLFieldResolver<TSource, TContext> = (
  source: TSource,
  args: { [argName: string]: any },
  context: TContext,
  info: GraphQLResolveInfo
) => mixed;
As you can see, a GraphQLFieldResolver is a function that takes four arguments.
source: The source is the parent object of the current field. For example, if you were defining a resolver for a field fullName on the User type, the source would be the full user object.
args: The args are any input arguments for that resolver. In my query above, it would contain the value of the $post variable.
context: The context is a global object for a GraphQL execution. This is useful for passing information around that a resolver might need. For example, you can include a database connection that you can use from your resolvers without importing it in every file.
info: The info object contains information about your GraphQL schema, the query, and other information such as the path to the current resolver being executed. This is useful in many ways. Here is one post talking about how you can use it to precompute queries: (https://scaphold.io/community/blog/querying-relational-data-with-graphql/)
This idea of having types and field resolvers is part of what makes GraphQL so powerful. Once you've defined your type system and the resolvers for its fields, you can structure your schema however you want, and GraphQL will always make sure to call the correct resolver no matter how deeply nested a query might be.
I hope this helps :)
