How to PUT / UPDATE nested data with GraphQL?

I'm attempting my first GraphQL backend using AWS AppSync. I'm simply trying to figure out how to use one-to-many associations. I expect to receive the many related objects as a list of children, and to be able to write some of these children when creating a new user.
type User {
  id: ID!
  name: String!
  records: [Records!]!
}

type Records {
  id: ID!
  userId: ID!
  title: String!
  # ... etc ...
}
Using the AppSync interface, I click on Create Resources once to make a Records table and again to make a Users table, both in DynamoDB. This also automatically adds mutations, subscriptions, input types, and other types to my schema, and creates resolvers for me.
What is the syntax for a mutation to create Record objects associated with my User objects? How can I PUT the Record data when I create the User?
If needed I can include more of the schema that AppSync is autogenerating.

Since you are using two DynamoDB tables (Users and Records), you will need to make two DynamoDB calls during the CreateUser mutation. One way to make two DynamoDB calls in a single mutation is to utilize DynamoDB's BatchPutItem operation.
To utilize BatchPutItem, you will need to modify the resolver attached to your CreateUser mutation. The resolver is responsible for taking your GraphQL request, converting it into a DynamoDB operation, and then converting the result of that operation into a GraphQL response. A resolver has two components: a request mapping template and a response mapping template.
The request mapping template will be responsible for taking mutation arguments and converting them into a DynamoDB BatchPutItem request.
The resolver's response mapping template will be responsible for converting the result of the DynamoDB BatchPutItem operation into your mutation's return type/structure.
Here is a tutorial on how to utilize multi-table BatchPutItem in a resolver: https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
Here is a programming guide for the Velocity Template Language (VTL) used to write these resolver mapping templates: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-programming-guide.html
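For the mutation syntax itself, here is a rough sketch of what the schema additions could look like once the resolver performs a BatchPutItem. All names below (RecordInput, CreateUserWithRecordsInput, createUserWithRecords) are hypothetical; AppSync's autogenerated types will differ, and the request mapping template must map input.records into the batch write:

input RecordInput {
  title: String!
}

input CreateUserWithRecordsInput {
  name: String!
  records: [RecordInput!]!
}

type Mutation {
  createUserWithRecords(input: CreateUserWithRecordsInput!): User!
}

A call to it could then look like:

mutation {
  createUserWithRecords(input: {
    name: "Alice"
    records: [{ title: "First record" }, { title: "Second record" }]
  }) {
    id
    name
    records {
      id
      title
    }
  }
}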

Related

Conceptual Question: Shared GraphQL schema for multiple endpoints (client/admin)

Context
I am using an NX Workspace to organize two different Angular frontends (client & admin). To separate client and admin logic, two different NestJS backend services, each exposing a GraphQL API, are used.
As both services fetch data from a single MongoDB database, a single database library is shared between them.
Both backend services currently use a single GraphQL schema generated through a schema-first approach and a single database layer. In most cases the type and field definitions match between client and admin, but in some cases one service requires additional query arguments or fields.
For example, the admin service depends on the confirmed and banned fields of the User type, while these shouldn't be available through the client service.
Furthermore, the getUsers query, for example, should not be exposed through the client service.
type User {
  _id: ID
  name: String
  email: String
  confirmed: Boolean
  banned: Boolean
}

type Query {
  getUserById(userId: String): User
  getUsers: [User]
}
Question
Are there any best practices for how to proceed with the GraphQL schema(s) in a case like this, where the types are almost identical?
You can use schema directives to define authorization rules in a declarative manner directly in your GraphQL schema.
A common approach would be to assign roles to a user and then use these roles to allow/block access to certain mutations or queries.
So for your example, I would imagine any request coming from the client app would be made by a user with a role of client, and any request coming from the admin app by a user with a role of admin.
So, to build on your example of limiting the getUsers query to just admins, we could add this directive to our schema:
type User {
  _id: ID
  name: String
  email: String
  confirmed: Boolean
  banned: Boolean
}

type Query {
  getUserById(userId: String): User
  getUsers: [User] @hasRole(roles: [admin])
}
You can read more about how to actually implement the custom hasRole directive in the NestJS docs: https://docs.nestjs.com/graphql/directives
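For the snippet above to be valid SDL, the directive itself also has to be declared somewhere in the schema. A minimal sketch, where the Role enum is an assumption made to match the example:

# Declaration of the custom directive and the roles it accepts.
directive @hasRole(roles: [Role!]!) on FIELD_DEFINITION

enum Role {
  admin
  client
}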

Auto-update of apollo client cache after mutation not affecting existing queries

I have a mutation (UploadTransaction) that returns a list of objects of a type named Transaction.
#import "TransactionFields.gql"
mutation UploadTransaction($files: [Upload!]!) {
  uploadFile(files: $files) {
    transactions {
      ...TransactionFields
    }
  }
}
The Transaction objects returned from the backend (Graphene) have id and __typename fields, so they should automatically update the Transaction entries in the cache. In the Chrome dev tools for Apollo, I can see the new transactions.
I also have a query GetTransactions fetching all Transaction objects.
#import "TransactionFields.gql"
query GetTransactions {
  transactions {
    ...TransactionFields
  }
}
However, I don't see the newly added Transaction objects being returned by the query. During the initial load, Apollo Client loaded 292 transactions, which it shows under ROOT_QUERY, and it keeps returning the same 292 transactions. The UploadTransaction mutation adds new objects of type Transaction to the cache (visible in dev tools) without affecting ROOT_QUERY in dev tools or my query in code.
TransactionFields.gql is
fragment TransactionFields on Transaction {
  id
  timestamp
  description
  amount
  category {
    id
    name
  }
  currency
}
Any idea what I am doing wrong? I am new to Apollo Client and GraphQL.
From the docs:
If a mutation updates a single existing entity, Apollo Client can automatically update that entity's value in its cache when the mutation returns. To do so, the mutation must return the id of the modified entity, along with the values of the fields that were modified. Conveniently, mutations do this by default in Apollo Client...
If a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
If you have a query that returns a list of entities (for example, users) and then create or delete a user, Apollo has no way of knowing that the list should be updated to reflect your mutation. The reason for this is twofold:
There's no way for Apollo to know what a mutation is actually doing. All it knows is what fields you are requesting and what arguments you are passing those fields. We might assume that a mutation that includes words like "insert" or "create" is inserting something on the backend but that's not a given.
There's no way to know that inserting, deleting or updating a user should update a particular query. Your query might be for all users with the name "Bob" -- if you create a user with the name "Susan", the query shouldn't be updated to reflect that addition. Similarly, if a mutation updates a user, the query might need to be updated to reflect the change. Whether it should or not ultimately boils down to business rules that only your server knows about.
So, in order to update the cache, you have two options:
Trigger a refetch of the relevant queries. You can do this by either passing a refetchQueries option to your useMutation hook, or by manually calling refetch on those queries. Since this requires one or more additional requests to your server, it's the slower and more expensive option but can be the right option when A) you don't want to inject a bunch of business logic into your client or B) the updates to the cache are complicated and extensive.
Provide an update function to your useMutation hook that tells Apollo how to update the cache based on the results of the mutation. This saves you from making any additional requests, but does mean you have to duplicate some business logic between your server and your client.
The example of using update from the docs:
const [addTodo] = useMutation(ADD_TODO, {
  update(cache, { data: { addTodo } }) {
    const { todos } = cache.readQuery({ query: GET_TODOS });
    cache.writeQuery({
      query: GET_TODOS,
      data: { todos: todos.concat([addTodo]) },
    });
  },
});
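For comparison, the refetchQueries option is usually a one-liner. A minimal sketch, reusing the same hypothetical ADD_TODO mutation and GET_TODOS query:

const [addTodo] = useMutation(ADD_TODO, {
  // Re-run GET_TODOS against the server after the mutation completes;
  // the response replaces whatever the cache holds for that query.
  refetchQueries: [{ query: GET_TODOS }],
});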
Read the docs for additional details.

Update Apollo cache after object creation

What are all the different ways of updating the Apollo InMemoryCache after a mutation? From the docs, I can see:
- Id-based updates which Apollo performs automatically
  - Happens for single updates to existing objects only.
  - Requires an id field which uniquely identifies each object, or the cache must be configured with a dataIdFromObject function which provides a unique identifier.
- "Manual" cache updates via update functions
  - Required for object creation, deletion, or updates of multiple objects.
  - Involves calling cache.writeQuery with details including which query should be affected and how the cache should be changed.
- Passing the refetchQueries option to the useMutation hook
  - The calling code says which queries should be re-fetched from the API, Apollo does the fetching, and the results replace whatever is in the cache for the given queries.
Are there other ways that I've missed, or have I misunderstood anything about the above methods?
I am confused because I've been reading the code of a project which uses Apollo for all kinds of mutations, including creations and deletions, but I don't see any calls to cache.writeQuery, nor any usage of refetchQueries. How does the cache get updated after creations and deletions without either of those?
In my own limited experience with Apollo, the cache is not automatically updated after an object creation or deletion, not even if I define dataIdFromObject. I have to update the cache myself by writing update functions.
So I'm wondering if there is some secret config I've missed to make Apollo handle it for me.
The only way to create or delete a node and have Apollo automatically update the cache to reflect the change is to have the mutation return the parent of the field that contains the updated list. For example, let's say we have a schema like this:
type Query {
  me: User
}

type User {
  id: ID!
  posts: [Post!]!
}

type Post {
  id: ID!
  body: String!
}
By convention, if we had a mutation to add a new post, the mutation field would return the created post.
type Mutation {
  writePost(body: String!): Post!
}
However, we could have it return the logged in User instead (the same thing the me field returns):
type Mutation {
  writePost(body: String!): User!
}
By doing so, we enable the client to make a query like:
mutation WritePost($body: String!) {
  writePost(body: $body) {
    id
    posts {
      id
      body
    }
  }
}
Here Apollo will not only create or update the cache entries for all the returned posts, but it will also update the returned User object, including its list of posts.
So why is this not commonly done? Why does Apollo's documentation suggest using writeQuery when adding or deleting nodes?
The above will work fine when your schema is simple and you're working with a relatively small amount of data. However, returning the entire parent node, including all its relations, can be noticeably slower and more resource-intensive once you're dealing with more data. Additionally, in many apps a single mutation could impact multiple queries inside the cache. The same node could be returned by any number of fields in the schema, and even the same field could be part of a number of different queries that utilize different filters, sort parameters, etc.
These factors make it unlikely that you'll want to implement this pattern in production but there certainly are use cases where it may be a valid option.

Nested mutation GraphQL

I'm using AWS Appsync and Amplify.
A snippet of my GraphQL schema looks like this:
type Recipe @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  title: String!
  key: String!
  courses: [Course!]!
}

type Course @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
}
On amplify push, it creates the DynamoDB tables Recipe and Course.
After reading many tutorials, I still don't get how to add a recipe in GraphiQL.
How can I insert a new Recipe that references a Course, while avoiding duplicates in the Course table?
To create multiple Recipes referencing the same Course without duplicates in the Course table, you need to design a many-to-many relationship.
So far, the relationship you have designed is not enough for AppSync to understand; you are missing @connection directives. You can read this answer on GitHub for an explanation of how to design this many-to-many relation in AppSync.
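As a sketch of what that can look like with the Amplify transformer, using a join model between Recipe and Course (the RecipeCourse type and key names are assumptions, and the @auth rules are omitted for brevity):

type Recipe @model {
  id: ID!
  title: String!
  key: String!
  courses: [RecipeCourse] @connection(keyName: "byRecipe", fields: ["id"])
}

type Course @model {
  id: ID!
  name: String!
  recipes: [RecipeCourse] @connection(keyName: "byCourse", fields: ["id"])
}

# Join model: one row per Recipe/Course pair, so each Course is stored
# once and can be linked to many Recipes.
type RecipeCourse
  @model
  @key(name: "byRecipe", fields: ["recipeID", "courseID"])
  @key(name: "byCourse", fields: ["courseID", "recipeID"]) {
  id: ID!
  recipeID: ID!
  courseID: ID!
  recipe: Recipe @connection(fields: ["recipeID"])
  course: Course @connection(fields: ["courseID"])
}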
After designing the relation, you will use a mutation to insert data; AppSync will likely generate the mutation code for you (if not, run amplify codegen in the console). You will then be able to create data.
Since you use DynamoDB with multiple tables (the default mode for Amplify / AppSync), you will have to either:
Call multiple mutations in a row (sketched below)
Use a custom resolver, as described in this SO answer
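A hedged sketch of the first option, using the createX mutations Amplify generates for @model types and the hypothetical RecipeCourse join model sketched above:

# 1) Create the Course once and note the returned id.
mutation {
  createCourse(input: { name: "Dessert" }) {
    id
  }
}

# 2) Create the Recipe.
mutation {
  createRecipe(input: { title: "Tiramisu", key: "tiramisu" }) {
    id
  }
}

# 3) Link the two ids through the join model; the Course row is reused,
#    not duplicated.
mutation {
  createRecipeCourse(input: { recipeID: "<recipe id>", courseID: "<course id>" }) {
    id
  }
}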

How to query Apollo GraphQL server with a specific context?

I am writing an Apollo GraphQL API that returns product information from various brands. A simplified version of the schema looks like this:
type Query {
  products: [Product]!
}

type Product {
  name: String!
  brand: String!
}
I want to be able to query products from a specific brand. Normally this would be simple to achieve by adding a brand argument to the products query:
type Query {
  products(brand: String!): [Product]!
}
However, I have multiple GraphQL clients in different apps and each is associated with a specific brand so it seems redundant to always pass the same brand argument in every query. I also have many other objects in my schema (orders, transactions, etc.) that are specific to a brand and would require a brand argument.
Furthermore, my resolvers need to query a different API depending on the brand so even objects in my schema such as User, which are conceptually unrelated to a brand, would potentially need a brand argument so that the resolver knows which API to fetch from.
Is there a way to set the brand context for each client and have this context received by the server? Or maybe there is a better way to achieve this brand separation?
I would probably make Brand a first-class type in your GraphQL schema. That doesn't save you from having to qualify many of the queries you describe with a specific brand, but it at least gives you a common place to start from. Then you'd wind up with an API somewhat like:
type Query {
  brand(name: String!): Brand
  allProducts: [Product!]!
}

type Brand {
  name: String!
  products: [Product!]!
  # users: [User!]!
}

type Product {
  name: String!
  brand: Brand! # typical, but not important to your question
}
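With that shape, each client only has to supply its brand name in one place. A minimal query sketch (the brand name "Acme" is a stand-in):

query BrandProducts {
  brand(name: "Acme") {
    products {
      name
    }
  }
}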
If the differences between kinds of brands are visible at the API layer, you also could consider using a GraphQL interface to describe the set of fields that all brands have, but actually return a more specific type from the resolver.
The way you describe your application, it could also make sense to run one copy of the service for each brand, each with a different GraphQL endpoint. That would let you straightforwardly parameterize the per-brand internal object configuration and make the "current brand" be process-global context. The big constraints here are that, at a GraphQL level, one brand's objects can never refer to another, and if you have a lot of brands, you need some good way to run a lot of servers.
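If you go the one-service-per-brand route, the "current brand" can live in process-global configuration and be passed to every resolver through the context. A minimal Apollo Server sketch, assuming typeDefs and resolvers are defined elsewhere and the brand name comes from an environment variable:

const { ApolloServer } = require('apollo-server');

// One process per brand: the brand is fixed at startup, and every
// resolver can read it from the context to pick the right upstream API.
const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => ({ brand: process.env.BRAND }),
});

server.listen().then(({ url }) => {
  console.log(`Serving brand ${process.env.BRAND} at ${url}`);
});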
