Apollo: Extending type from remote schema

I currently have multiple GraphQL services running Apollo and have created a "Gateway" service that uses remote schema stitching in order to give me a single endpoint for access.
Within my Gateway service I am looking to extend the remote types to create references between the stitched schemas.
const { mergeSchemas } = require('graphql-tools');

const linkTypeDefs = `
  extend type User {
    profile: Profile
  }

  extend type Profile {
    user: User
  }
`;

const schema = mergeSchemas({
  schemas: [userSchema, profileSchema, linkTypeDefs],
  resolvers: /* Resolvers */
});
However, I seem to be getting the following error:
GraphQLError: Cannot extend type "User" because it does not exist in the existing schema.
I have double-checked that the types "User" and "Profile" exist, and I can query them from the Gateway's GraphiQL.
Are there any particular steps I need to take in order to extend types merged from a remote schema?

I eventually resolved this by realising that userSchema and profileSchema were both returning promises.
I awaited these return values, and that resolved the issue for me.
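For reference, a minimal sketch of the fix, assuming the remote schemas are built with introspectSchema and makeRemoteExecutableSchema from graphql-tools (both asynchronous); the link arguments and linkTypeDefs are taken from the question:

const {
  introspectSchema,
  makeRemoteExecutableSchema,
  mergeSchemas,
} = require('graphql-tools');

async function createGatewaySchema(userLink, profileLink) {
  // introspectSchema returns a promise, so the remote schemas must be
  // awaited before they are handed to mergeSchemas.
  const userSchema = makeRemoteExecutableSchema({
    schema: await introspectSchema(userLink),
    link: userLink,
  });
  const profileSchema = makeRemoteExecutableSchema({
    schema: await introspectSchema(profileLink),
    link: profileLink,
  });

  return mergeSchemas({
    schemas: [userSchema, profileSchema, linkTypeDefs],
    resolvers: {}, // link resolvers for User.profile / Profile.user go here
  });
}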

Related

Conceptual Question: Shared GraphQL schema for multiple endpoints (client/admin)

Context
I am using an NX Workspace to organize two different Angular frontends (client & admin). To separate client and admin logic, two different NestJS backend services, each exposing GraphQL, are used.
As both services fetch data from a single MongoDB, a single database library is used for both frontends.
Both backend services currently use a single GraphQL schema generated through a schema-first approach and a single database layer. In most cases the type and field definitions match between client and admin, but in some cases one service requires additional query arguments or fields.
For example, the admin service depends on the confirmed and banned fields of the User type, while these should not be available through the client service.
Furthermore, the getUsers query, for example, should not be exposed through the client service.
type User {
  _id: ID
  name: String
  email: String
  confirmed: Boolean
  banned: Boolean
}

type Query {
  getUserById(userId: String): User
  getUsers: [User]
}
Question
Are there any best practices for how to proceed with the GraphQL schema(s) in such a case, given that the types are almost identical?
You can use schema directives to define authorization rules in a declarative manner directly in your GraphQL schema.
A common approach would be to assign roles to a user and then use these roles to allow or block access to certain mutations or queries.
So for your example, I would imagine any request coming from the client app would be made by a user with a role of client, and any request coming from the admin app would be made by a user with a role of admin.
To build on your example of limiting the getUsers query to just admins, we could add this directive to our schema:
type User {
  _id: ID
  name: String
  email: String
  confirmed: Boolean
  banned: Boolean
}

type Query {
  getUserById(userId: String): User
  getUsers: [User] @hasRole(roles: ["admin"])
}
You can read more about how to actually implement the custom hasRole directive in the NestJS docs: https://docs.nestjs.com/graphql/directives
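For a rough idea of the moving parts, here is a hedged sketch of such a directive implemented with @graphql-tools/utils (the same mechanism the NestJS docs build on); the directive name, the roles argument type, and context.user.roles are all assumptions, not anything from the question:

const { mapSchema, getDirective, MapperKind } = require('@graphql-tools/utils');
const { defaultFieldResolver } = require('graphql');

// Wraps the resolver of every field annotated with @hasRole so it first
// checks the caller's roles. Assumes the schema declares:
//   directive @hasRole(roles: [String!]!) on FIELD_DEFINITION
// and that the authenticated user's roles live on context.user.roles.
function hasRoleDirectiveTransformer(schema, directiveName = 'hasRole') {
  return mapSchema(schema, {
    [MapperKind.OBJECT_FIELD]: (fieldConfig) => {
      const directive = getDirective(schema, fieldConfig, directiveName)?.[0];
      if (!directive) return fieldConfig;

      const { roles } = directive;
      const { resolve = defaultFieldResolver } = fieldConfig;
      fieldConfig.resolve = (source, args, context, info) => {
        const userRoles = (context.user && context.user.roles) || [];
        if (!roles.some((role) => userRoles.includes(role))) {
          throw new Error('Not authorized');
        }
        return resolve(source, args, context, info);
      };
      return fieldConfig;
    },
  });
}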

Error Message: Instance Data is not valid whilst updating a GraphQL schema

I had an issue with updating a GraphQL schema and got this message (in my case, I was updating a FaunaDB schema). I couldn't find any reference to it online.
Although this is not the most straightforward or descriptive error message ever conceived, it simply means (in this case) that I had created a record which would not fit my new schema, because I had added a required field. Although I had deleted the records in the specific collection, I had not deleted the records in other collections which referenced it.
In my case, however, I received this error because I had added a query with the same name as a query automatically created by Fauna (or perhaps created by a previous schema?).
This is the code that caused the error:
type User {
  uid: ID! @unique
}

type Query {
  user(uid: ID!): User
}
The solution was to rename the query:
type Query {
  findUser(uid: ID!): User
}
An alternative possible solution with FaunaDB is to override the schema rather than just update it. This applies if the unwanted user() query is still in your schema as a result of prior schema updates.
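If you go that route, one option is Fauna's GraphQL import endpoint, which accepts a mode parameter; a hedged sketch (my understanding is that mode=override replaces the schema and, per Fauna's docs, deletes the data in the collections it manages, so use with care):

const fs = require('fs');

// Assumes a Node runtime with global fetch and a FAUNA_SECRET env var.
fetch('https://graphql.fauna.com/import?mode=override', {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.FAUNA_SECRET}` },
  body: fs.readFileSync('schema.gql', 'utf8'),
})
  .then((res) => res.text())
  .then(console.log);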

Auto-update of apollo client cache after mutation not affecting existing queries

I have a mutation (UploadTransaction) that returns a list of objects of a type named Transaction.
#import "TransactionFields.gql"
mutation UploadTransaction($files: [Upload!]!) {
  uploadFile(files: $files) {
    transactions {
      ...TransactionFields
    }
  }
}
Each Transaction returned from the backend (graphene) has an id and a __typename field, so Apollo should automatically update the corresponding Transaction entries in the cache. In the Apollo Chrome dev tools, I can indeed see the new transactions.
I also have a query GetTransactions fetching all Transaction objects.
#import "TransactionFields.gql"
query GetTransactions {
  transactions {
    ...TransactionFields
  }
}
However, I don't see the newly added Transactions being returned by the query. During the initial load, the Apollo client fetched 292 transactions, which it shows under ROOT_QUERY, and it keeps returning the same 292 transactions. The UploadTransaction mutation adds new objects of type Transaction to the cache (visible in the dev tools) without affecting ROOT_QUERY in the dev tools or the result of my query in code.
TransactionFields.gql is:

fragment TransactionFields on Transaction {
  id
  timestamp
  description
  amount
  category {
    id
    name
  }
  currency
}
Any idea what I am doing wrong? I am new to Apollo Client and GraphQL.
From the docs:
If a mutation updates a single existing entity, Apollo Client can automatically update that entity's value in its cache when the mutation returns. To do so, the mutation must return the id of the modified entity, along with the values of the fields that were modified. Conveniently, mutations do this by default in Apollo Client...
If a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
If you have a query that returns a list of entities (for example, users) and you then create or delete a user, Apollo has no way of knowing that the list should be updated to reflect your mutation. The reason for this is twofold:
- There's no way for Apollo to know what a mutation is actually doing. All it knows is what fields you are requesting and what arguments you are passing to those fields. We might assume that a mutation that includes words like "insert" or "create" is inserting something on the backend, but that's not a given.
- There's no way to know that inserting, deleting or updating a user should update a particular query. Your query might be for all users with the name "Bob" -- if you create a user with the name "Susan", the query shouldn't be updated to reflect that addition. Similarly, if a mutation updates a user, the query might need to be updated to reflect the change. Whether it should or not ultimately boils down to business rules that only your server knows about.
So, in order to update the cache, you have two options:
- Trigger a refetch of the relevant queries. You can do this either by passing a refetchQueries option to your useMutation hook, or by manually calling refetch on those queries. Since this requires one or more additional requests to your server, it's the slower and more expensive option, but it can be the right one when A) you don't want to inject a bunch of business logic into your client, or B) the updates to the cache are complicated and extensive. (See the sketch after this list.)
- Provide an update function to your useMutation hook that tells Apollo how to update the cache based on the results of the mutation. This saves you from making any additional requests, but does mean you have to duplicate some business logic between your server and your client.
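A minimal sketch of the first option, assuming UPLOAD_TRANSACTION and GET_TRANSACTIONS are the parsed documents for the operations shown in the question:

import { useMutation } from '@apollo/client';

function useUploadTransactions() {
  // After the mutation completes, Apollo refetches GetTransactions from
  // the server, so the cached list under ROOT_QUERY is rebuilt.
  return useMutation(UPLOAD_TRANSACTION, {
    refetchQueries: [{ query: GET_TRANSACTIONS }],
  });
}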
The example of using update from the docs:
update(cache, { data: { addTodo } }) {
  const { todos } = cache.readQuery({ query: GET_TODOS });
  cache.writeQuery({
    query: GET_TODOS,
    data: { todos: todos.concat([addTodo]) },
  });
}
Read the docs for additional details.
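Applied to the operations in the question, the second option might look roughly like this (names are assumed from the question; uploadFile is taken to be the mutation's root field, as in the question's document):

import { useMutation } from '@apollo/client';

function useUploadTransactions() {
  return useMutation(UPLOAD_TRANSACTION, {
    update(cache, { data: { uploadFile } }) {
      // Merge the transactions returned by the mutation into the cached
      // result of GetTransactions so the list query sees them immediately.
      const { transactions } = cache.readQuery({ query: GET_TRANSACTIONS });
      cache.writeQuery({
        query: GET_TRANSACTIONS,
        data: { transactions: [...transactions, ...uploadFile.transactions] },
      });
    },
  });
}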

How to PUT / UPDATE nested data with GraphQL?

I'm attempting my first GraphQL backend using AWS AppSync. I'm simply trying to figure out how to use one-to-many associations. I expect to receive the many related objects as a list of children, and to be able to write some of these children when creating a new user.
type User {
  id: ID!
  name: String!
  records: [Records!]!
}

type Records {
  id: ID!
  userId: ID!
  title: String!
  # ... etc ...
}
Using the AppSync interface, I click on Create Resources once to make a Records table and again to make a Users table, both in DynamoDB. This also automatically adds mutations, subscriptions, input types, and more types, to my schema, and creates resolvers for me.
What is the syntax for a mutation to create Record objects associated with my User objects? How can I PUT the Record data when I create the User?
If needed I can include more of the schema that AppSync is autogenerating.
Since you are using two DynamoDB tables (Users and Records), you will need to make two DynamoDB calls during the CreateUser mutation. One way to make two DynamoDB calls in a single mutation is to utilize DynamoDB's BatchPutItem operation.
To utilize BatchPutItem, you will need to modify the resolver attached to your CreateUser mutation. The resolver is responsible for taking your GraphQL request, converting it into a DynamoDB operation, and then converting the result of that operation into a GraphQL response. Resolvers have two components: a request mapping template and a response mapping template.
The request mapping template will be responsible for taking mutation arguments and converting them into a DynamoDB BatchPutItem request.
The resolver's response mapping template will be responsible for converting the result of the DynamoDB BatchPutItem operation into your mutation's return type/structure.
Here is a tutorial on how to utilize multi-table BatchPutItem in a resolver: https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
Here is a programming guide for the template language (Apache Velocity) used by the resolvers: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-programming-guide.html
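As for the syntax of the mutation itself: once the resolver performs a BatchPutItem across both tables, the call could plausibly take the nested records inline, along these lines (createUser and its input shape are hypothetical here, not the names AppSync autogenerates; check your generated schema for the real ones):

mutation CreateUserWithRecords {
  createUser(input: {
    name: "Alice"
    records: [{ title: "First record" }, { title: "Second record" }]
  }) {
    id
    name
    records {
      id
      title
    }
  }
}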

Apollo graphql remote schema extending is dependent on remote fields

We are able to extend our remote schema, but with one major caveat: the new field cannot be queried for on its own; it must be included with at least one field from the remote schema. Is it possible to query for just the extended field?
In the example below, I have extended the type "name" to include the field "catsName". If I query for "first", the query works. If I query for "catsName" and "first", the query works. If I query for just "catsName", it returns an internal server error with status code 400.
Note:
- When we extend non-remote fields, we do not have this issue.
- Our remote GraphQL engine uses Absinthe (Erlang/Elixir). We use Apollo locally. Our goal is to support the legacy Absinthe GraphQL implementation.
Working query:

query {
  user {
    profile {
      personal {
        name {       # extended type
          catsName   # new field
          first      # original field
        }
      }
    }
  }
}
Non-working query:

query {
  user {
    profile {
      personal {
        name {       # extended type
          catsName   # new field
        }
      }
    }
  }
}
Error:

"message": "Field \"name\" of type \"UserPersonalName\" must have a selection of subfields. Did you mean \"name { ... }\"?"
After further research, we determined that the reason for this is that user, profile, personal, and name exist on the remote server, while the catsName extension exists only locally. If we query for just catsName, the gateway strips the local field from the delegated request, leaving name with no remote subfields to select, and the remote therefore returns an error. If we include at least one remote field in our query, the local engine knows to fetch the remote and is able to return all of the data.
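One way around this with graphql-tools-style stitching is to give the local field's resolver a fragment hint, so the gateway always includes at least one remote field in the delegated query. A sketch, assuming graphql-tools v4; lookupCatsName is a hypothetical helper, and remoteSchema/linkTypeDefs stand in for our actual stitched pieces:

const { mergeSchemas } = require('graphql-tools');

const schema = mergeSchemas({
  schemas: [remoteSchema, linkTypeDefs],
  resolvers: {
    UserPersonalName: {
      catsName: {
        // Forces "first" into the remote selection set even when the
        // client asks only for catsName, keeping the remote query valid.
        fragment: '... on UserPersonalName { first }',
        resolve: (name) => lookupCatsName(name.first), // hypothetical helper
      },
    },
  },
});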
