How do you update the cache after a mutation that returns a circular type? (graphql)

All of the answers I have found relate to plain GraphQL. I need to know how to update the cache on the client using Apollo GraphQL.
Given this Friend type and mutation.
type Friend {
  id: String
  name: String
  friends: [Friend]
}
type Mutation {
  createFriend(
    friends: [FriendInput]
  ): [Friend]
}
The friends array is circular. How do you represent this in the response, and how do you update the client's cache?

If you're interested in the friends of a specific person, your store probably contains a bunch of Friend objects (I would actually call them Person, with friends just a field on the Person type). For the mutation, it should be enough to provide the id of each friend of the new person, unless you want to create several people in one mutation rather than just one.
For the mutation response, just include the data you need for each friend. If you need the name and id of each of the person's friends, include those as well. Most likely you won't need to go two levels deep, but if you want to, you can do that too.
In Apollo Client, you don't actually need to do anything special to have this data be properly written into your store, because Apollo Client automatically normalizes by the id field and stores each friend only once. So if you're sure that you already have all the persons on the client, it will be enough to ask only for the id of each friend, so for example:
mutation {
  createFriend(friends: [{ name: "Joe", friends: [{ id: "1" }, { id: "4" }] }]) {
    id
    name
    friends {
      id
      name
    }
  }
}
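On the client, here is a minimal sketch using Apollo Client 3's useMutation and cache.modify. Existing Friend objects are updated purely by normalization; the friends root query field written to at the end is an assumption about your schema:

import { gql, useMutation } from '@apollo/client';

const CREATE_FRIEND = gql`
  mutation CreateFriend($friends: [FriendInput]) {
    createFriend(friends: $friends) {
      id
      name
      friends {
        id
        name
      }
    }
  }
`;

function useCreateFriend() {
  return useMutation(CREATE_FRIEND, {
    update(cache, { data }) {
      // Friends that already exist in the cache are updated automatically,
      // because the cache normalizes every Friend by id. Only brand-new
      // friends need to be attached to cached lists that should show them.
      cache.modify({
        fields: {
          friends(existingRefs = [], { toReference }) {
            const newRefs = data.createFriend.map((f) =>
              toReference({ __typename: 'Friend', id: f.id })
            );
            return [...existingRefs, ...newRefs];
          },
        },
      });
    },
  });
}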

Related

Nested GraphQL mutations with AWS Amplify/AppSync

I've reached out on the AWS forums but am hoping to get some attention here with a broader audience. I'm looking for any guidance on the following question.
I'll post the question below:
Hello, thanks in advance for any help.
I'm new to Amplify/GraphQL and am struggling to get mutations working. Specifically, when I add a connection to a model, the connected fields never appear in the mock API's generated inputs. If I write them out by hand, I get "input doesn't exist". I've searched around, and the common advice is "create the sub-item before the main item and then update the main item", but I don't want that. I have a large form with several many-to-many relationships, and they all need to be valid before I can save the main form. I don't see how I can create every sub-item before the main one.
However, the items are listed in the available data for the response. In the example below, addresses, shareholders, and boardOfDirectors are all missing from the input.
None of the fields with '@connection' appear in the create API as inputs. I'll take any help/guidance I can get. I seem to not be understanding something core here.
Here's my Model:
type Company @model(queries: { get: "getEntity", list: "listEntities" }, subscriptions: null) {
  id: ID!
  name: String!
  president: String
  vicePresident: String
  secretary: String
  treasurer: String
  shareholders: Shareholder @connection
  boardOfDirectors: BoardMember @connection
  addresses: [Address]! @connection
  ...
}
type Address @model {
  id: ID!
  line1: String!
  line2: String
  city: String!
  postalCode: String!
  state: State!
  type: AddressType!
}
type BoardMember @model {
  id: ID!
  firstName: String!
  lastName: String!
  email: String!
}
type Shareholder @model {
  id: ID!
  firstName: String!
  lastName: String!
  numberOfShares: String!
  user: User!
}
----A day later----
I have made some progress, but I'm still lacking some understanding of what's going on.
I have updated the schema to be:
type Company @model(queries: { get: "getEntity", list: "listEntities" }, subscriptions: null) {
  id: ID!
  name: String!
  president: String
  vicePresident: String
  secretary: String
  treasurer: String
  ...
  address: Address @connection
  ...
}
type Address @model {
  id: ID!
  line1: String!
  line2: String
  city: String!
  postalCode: String!
  state: State!
  type: AddressType!
}
I removed the many-to-many relationship that I was attempting, and now I'm limited to a company having only one address. I guess that's a future problem. However, a 'CompanyAddressId' now appears among the inputs. This would indicate that it expects me to save the address before the company. The address is just one part of the company, and I don't want to save addresses if they aren't valid and some other part of the form fails and the user quits.
I don't get why I can't write out all the fields at once. Going along with the schema above, I'll also have shareholders, board members, etc. So I have to create the list of board members and shareholders before I can create the company? This seems backwards.
Again, any attempt to help me figure out what I'm missing would be appreciated.
Thanks
--Edit--
What I'm seeing in explorer: [screenshot omitted]
-- Edit 2--
Here are the newly generated operations based on your example. You'll see that Company takes an address id now, which we discussed prior. But it doesn't take anything about the shareholder. In order to write out a shareholder I have to use 'createShareholder', which needs a company id, but the company hasn't been created yet. Thoroughly confused.
@engam, I'm hoping you can help out with these new questions. Thank you very much!
Here are some concepts that you can try out:
For the @model directive, try it out without renaming the queries. AWS Amplify gives good names to the automatically generated queries. For example, to get a company it will be getCompany, and for the list it will be listCompanys. If you still want to give them new names, you can change this later.
For the @connection directive:
The @connection needs to be set on both tables of the connection. Also, if you want many-to-many connections, you need to add a third table that handles the connections. It is also useful to give the connection a name when you have many connections in your schema.
Only scalar types that you have created in the schema, standard scalars like String, Int, Float and Boolean, and AWS-specific scalars (like AWSDateTime) can be used in the schema. Check out this link:
https://docs.aws.amazon.com/appsync/latest/devguide/scalars.html
Here is an example for some of what I think you want to achieve:
type Company @model {
  id: ID!
  name: String
  president: String
  vicePresident: String
  secretary: String
  treasurer: String
  shareholders: [Shareholder] @connection(name: "CompanyShareholderConnection")
  address: Address @connection(name: "CompanyAddressConnection") # one-to-many example
  # you may add more connections/attributes ...
}
# table handling many-to-many connections between users and companies, called Shareholder
type Shareholder @model {
  id: ID!
  company: Company @connection(name: "CompanyShareholderConnection")
  user: User @connection(name: "UserShareholderConnection")
  numberOfShares: Int # or String
}
type User @model {
  id: ID!
  firstname: String
  lastname: String
  company: [Shareholder] @connection(name: "UserShareholderConnection")
  # ... add more attributes/connections here
}
# address table, one address may have many companies
type Address @model {
  id: ID!
  street: String
  city: String
  code: String
  country: String
  companies: [Company] @connection(name: "CompanyAddressConnection") # many-to-one connection
}
Each type with @model generates a new DynamoDB table. This example makes it possible for you to create multiple companies and multiple users. To add a user as a shareholder of a company, you only need to create a new item in the Shareholder table, containing the id of the user from the User table and the id of the company from the Company table, plus how many shares.
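A minimal sketch of that linking step, assuming the api service used later in this answer and Amplify's generated input field names for these connections (shareholderCompanyId / shareholderUserId; check your generated schema for the exact names):

const linkShareholder = async (companyId, userId, shares) => {
  // Shareholder is the join table: one item per user-company link
  return api.createShareholder({
    shareholderCompanyId: companyId, // assumed generated field name
    shareholderUserId: userId, // assumed generated field name
    numberOfShares: shares,
  });
};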
Edit
Be aware that when you generate a connection between two tables, the Amplify CLI (which uses CloudFormation to do backend changes) will generate a new global index on one or more of the DynamoDB tables, so that AppSync can serve your data efficiently.
Limitations in DynamoDB make it possible to generate only one index (@connection) at a time when you edit a table. I think you can do more at a time when you create a new table (@model). So when you edit one or more of your tables, only remove or add one connection at a time between each amplify push / amplify publish. Otherwise CloudFormation will fail when you push the changes, and that can be a mess to clean up. I have had to delete a whole environment multiple times because of this, luckily never a production environment.
Update
(I also updated the Address table in the schema with some values.)
To connect a new address when you are creating a new company, you will first have to create a new address item in the Address table in DynamoDB.
The mutation generated by AppSync for this is probably named createAddress() and takes a createAddressInput.
After you create the address, you will receive back the whole newly created item, including the automatically created id (if you did not supply one yourself).
Now you may save the new company that you are creating. One of the attributes the createCompany mutation takes is the id of the address that you created, probably named companyAddressId. Store the address id here. When you then retrieve your company with either getCompany or listCompanys, you will get the address of your company.
JavaScript example:
// api is the name of the service with the mutations and queries
const createCompany = async (address, company) => {
  try {
    // create the address first so we have its id
    const newaddress = await api.createAddress({
      street: address.street,
      city: address.city,
      country: address.country
    });
    // then create the company, storing the address id on it
    const newcompany = await api.createCompany({
      name: company.name,
      president: company.president,
      ...
      companyAddressId: newaddress.id
    });
    return newcompany;
  } catch (error) {
    throw error;
  }
};
// and to retrieve the company including the address, you have to update your graphql statement for your query:
const statement = `query ListCompanys($filter: ModelCompanyFilterInput, $limit: Int, $nextToken: String) {
  listCompanys(filter: $filter, limit: $limit, nextToken: $nextToken) {
    items {
      __typename
      id
      name
      president
      ...
      address {
        __typename
        id
        street
        city
        code
        country
      }
    }
    nextToken
  }
}
`
AppSync will now retrieve all your companies (depending on your filter and limit) and the addresses of those companies you have connected an address to.
Edit 2
Each type with @model is a reference to a DynamoDB table in AWS. So when you are creating a one-to-many relationship between two tables and both items are new, you first have to create the item that is being referenced. In the schema above, where an address can have many companies and a company only one address, the company has to store the id (the DynamoDB primary key) of the address, so the address must exist first. You could of course generate the address id on the frontend, use it both as the id of the address and as the companyAddressId of the company, and run await Promise.all([createAddress(...), createCompany(...)]), but then if one call fails the other item will still be created (though generally AppSync APIs are very stable, so if the data you send is correct it won't fail).
Another solution, if you generally don't want to create/update multiple items in multiple tables, is to store the address directly in the company item.
type Company @model {
  name: String
  ...
  address: Address # or [Address] if you want more than one Address on the company
}
type Address {
  street: String
  postcode: String
  city: String
}
Then the Address type will be part of the same item in the same table in DynamoDB. But you will lose the ability to run queries on addresses (or shareholders), to look up an address and see which companies are located there (or similarly, look up a person and see which companies that person has a share in). Generally I don't like this method, because it locks your application to one specific use and makes it harder to create new features later on.
As far as I'm aware, it is not possible to create multiple items in multiple DynamoDB tables in one GraphQL (Amplify/AppSync) mutation. So async/await with Promise.all(), where you generate the id attributes manually on the frontend before creating the items, might be your best option.
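A minimal sketch of that Promise.all approach, assuming a uuid helper and the same hypothetical api service as above:

import { v4 as uuid } from 'uuid';

const createCompanyWithAddress = async (address, company) => {
  // Generate both primary keys on the client, so each item can
  // reference the other before either exists in DynamoDB.
  const addressId = uuid();
  const companyId = uuid();
  const [newAddress, newCompany] = await Promise.all([
    api.createAddress({ id: addressId, ...address }),
    api.createCompany({ id: companyId, ...company, companyAddressId: addressId }),
  ]);
  return { newAddress, newCompany };
};

// Caveat from above: if one call fails, the other may still succeed,
// so be prepared to clean up the orphaned item.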

How to use the same generated ID in two fields (prisma-graphql)

I'm implementing a GraphQL Prisma datamodel. Here I have a type called BankAccount. I may need to update and delete accounts as well. I'm implementing this as an immutable object: when updating, I mark the existing record as isDeleted and add a new record. When updating an existing record, I also need to keep the id of the previous record, to know which record was updated. So I've come up with a type like this
type BankAccount {
  id: ID! @unique
  parentBankAccount: String!
  bankName: String!
  bankAccountNo: String!
  isDeleted: Boolean! @default(value: "false")
}
Here the parentBankAccount keeps the id of the previous BankAccount. I'm thinking that when creating a bank account, I'd set the parentBankAccount to the same value as the id, as it doesn't have a parent. The thing is, I'm not sure that's possible. I'm a bit new to GraphQL, so any help would be appreciated.
Thanks
In GraphQL, generally if one object refers to another, you should directly refer to that object; you wouldn't embed its ID. You can also make fields nullable, to support the case where some relationship just doesn't exist. For this specific field, then, this would look like
type BankAccount {
  parentBankAccount: BankAccount
  ...
}
and that field would be null whenever an account doesn't have a parent.
At an API level, the layout you describe seems a little weird. If I call
query MyBankAccount {
  me { accounts { id } }
}
I'll get back some unique ID. I'd be a little surprised to later call
query MyBalance($id: ID!) {
  node(id: $id) {
    ... on BankAccount {
      name
      isDeleted
      balance
    }
  }
}
and find out that my account has been "deleted" and that the balance is from a week ago.
Using immutable objects in the underlying data store makes some sense, particularly for auditability reasons, but that tends to not be something you can expose out through a GraphQL API directly (or most other API layers: this would be equally surprising in a REST framework where the object URL is supposed to be permanent).
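One way to reconcile the two, as a sketch: keep the immutable version chain in the data store, but have the resolvers always surface the newest record under the original, stable id (findLatestVersion, findChain, and the previousVersions field are all hypothetical here):

const resolvers = {
  Query: {
    // the id the client holds stays stable; the resolver walks the
    // version chain and returns the newest record
    bankAccount: async (_parent, { id }, { db }) => {
      return db.findLatestVersion(id); // hypothetical helper
    },
  },
  BankAccount: {
    // expose the history explicitly if auditability should be visible
    previousVersions: async (account, _args, { db }) => {
      return db.findChain(account.parentBankAccount); // hypothetical helper
    },
  },
};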

AWS Appsync: How can I create a resolver that retrieves the details for an array of identifiers?

This feels basic, so I would expect to find this scenario mentioned, but I have searched and can't find an example that matches it. I have two endpoints (I am using HTTP data sources) that I'm trying to combine.
Class:
{
  id: string,
  students: [
    <studentID1>,
    <studentID2>,
    ...
  ]
}
and Student:
{
  id: String,
  lastName: String
}
What I would like is a schema that looks like this:
Student: {
  id: ID!
  lastName: String
}
Class: {
  id: ID!,
  studentDetails: [Student]
}
From reading, I know that I need some sort of resolver on Class.studentDetails that will return an array/List of student objects. Most of the examples I have seen show retrieving the list of Students based on class ID (ctx.source.id), but that won't work in this case. I need to call the students endpoint 1 time per student, passing in the student ID (I cannot fetch the list of students by class ID).
Is there a way to write a resolver for Class/studentDetails that loops through the student IDs in Class and calls my students endpoint for each one?
I was thinking something like this in the Request Mapping Template:
#set($studentDetails = [])
#foreach($student in $ctx.source.students)
#util.qr(list.add(...invoke web service to get student details...))
#end
$studentDetails
Edit: After reading Lisa Shon's comment below, I realized that there is a batch resolver for DynamoDB data sources that does this, but I don't see a way to do that for HTTP data sources.
It's not ideal, but you can create an intermediate type.
type Student {
  id: ID!
  lastName: String
}
type Class {
  id: ID!
  studentDetails: [StudentDetails]
}
type StudentDetails {
  student: Student
}
In your resolver template for Class, create a list of those student ids
#set($studentDetails = [])
#foreach ($student in $class.students)
  $util.qr($studentDetails.add({"id": "$student.id"}))
#end
and add it to your response object. Then hook a resolver to the student field of StudentDetails, and you will be able to use $context.source.id for the individual student API call. Each id will be broken out of the array and become its own web request.
I opened a case with AWS Support and was told that the only way they know to do this is to create a Lambda Resolver that:
Takes an array of student IDs
Calls the students endpoint for each one
Returns an array of student details information
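A sketch of such a Lambda, assuming a direct Lambda resolver (so event.source carries the parent Class) and a Node 18+ runtime with global fetch; the endpoint URL is hypothetical:

// fan out one HTTP request per student id and return the combined details
exports.handler = async (event) => {
  const studentIds = event.source.students; // ids from the parent Class object
  const details = await Promise.all(
    studentIds.map(async (id) => {
      // hypothetical endpoint: replace with your students service
      const res = await fetch(`https://example.com/students/${id}`);
      return res.json();
    })
  );
  return details; // resolves Class.studentDetails
};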
Instead of calling your student endpoint in the response, you can also use a pipeline resolver and stitch together the responses from its steps using the stash and context ($ctx.prev.result / $ctx.result), etc.

Apollo/GraphQL field type for object with dynamic keys

Let's say my GraphQL server wants to fetch the following data as JSON, where person3 and person5 are some ids:
"persons": {
"person3": {
"id": "person3",
"name": "Mike"
},
"person5": {
"id": "person5",
"name": "Lisa"
}
}
Question: How do I create the schema type definition for this with Apollo?
The keys person3 and person5 here are dynamically generated depending on my query (i.e. the area used in the query). So at another time I might get person1, person2, person3 returned.
As you can see, persons is not an iterable, so the following type definition I tried with Apollo won't work:
type Person {
  id: String
  name: String
}
type Query {
  persons(area: String): [Person]
}
The keys in the persons object may always be different.
One solution of course would be to transform the incoming JSON data to use an array for persons, but is there no way to work with the data as such?
GraphQL relies on both the server and the client knowing ahead of time what fields are available for each type. In some cases, the client can discover those fields (via introspection), but for the server, they always need to be known ahead of time. So dynamically generating those fields based on the returned data is not really possible.
You could utilize a custom JSON scalar (graphql-type-json module) and return that for your query:
type Query {
  persons(area: String): JSON
}
By utilizing JSON, you bypass the requirement for the returned data to fit any specific structure, so you can send back whatever you want as long it's properly formatted JSON.
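Wiring that up, as a sketch with apollo-server and graphql-type-json (fetchPersonsByArea is a hypothetical data fetcher):

const { ApolloServer, gql } = require('apollo-server');
const GraphQLJSON = require('graphql-type-json');

const typeDefs = gql`
  scalar JSON
  type Query {
    persons(area: String): JSON
  }
`;

const resolvers = {
  JSON: GraphQLJSON, // hooks the scalar's (de)serialization into the schema
  Query: {
    // the keyed object is returned untouched; no type checking happens here
    persons: (_root, { area }) => fetchPersonsByArea(area), // hypothetical
  },
};

new ApolloServer({ typeDefs, resolvers }).listen();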
Of course, there's significant disadvantages in doing this. For example, you lose the safety net provided by the type(s) you would have previously used (literally any structure could be returned, and if you're returning the wrong one, you won't find out about it until the client tries to use it and fails). You also lose the ability to use resolvers for any fields within the returned data.
But... your funeral :)
As an aside, I would consider flattening out the data into an array (like you suggested in your question) before sending it back to the client. If you're writing the client code, and working with a dynamically-sized list of customers, chances are an array will be much easier to work with rather than an object keyed by id. If you're using React, for example, and displaying a component for each customer, you'll end up converting that object to an array to map it anyway. In designing your API, I would make client usability a higher consideration than avoiding additional processing of your data.
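For reference, that flattening is a one-liner on the server before returning:

// convert { person3: {...}, person5: {...} } into [{...}, {...}]
const personsArray = Object.values(personsObject);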
You can write your own GraphQLScalarType and precisely describe your object and your dynamic keys: what you allow, what you do not allow, and how values are transformed.
See https://graphql.org/graphql-js/type/#graphqlscalartype
You can have a look at taion/graphql-type-json where he creates a Scalar that allows and transforms any kind of content:
https://github.com/taion/graphql-type-json/blob/master/src/index.js
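For example, here is a minimal custom scalar that only accepts objects whose keys look like person ids (a sketch, not taion's implementation):

const { GraphQLScalarType } = require('graphql');

const PersonMap = new GraphQLScalarType({
  name: 'PersonMap',
  description: 'An object keyed by ids like "person3"',
  serialize(value) {
    // validate outgoing data: every key must match personN
    for (const key of Object.keys(value)) {
      if (!/^person\d+$/.test(key)) {
        throw new TypeError(`PersonMap cannot represent key: ${key}`);
      }
    }
    return value;
  },
  parseValue(value) {
    return value; // input via variables; could validate the same way
  },
});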
I had a similar problem with dynamic keys in a schema, and ended up going with a solution like this:
query lookupPersons {
  persons {
    personKeys
    person3: personValue(key: "person3") {
      id
      name
    }
  }
}
returns:
{
  data: {
    persons: {
      personKeys: ["person1", "person2", "person3"],
      person3: {
        id: "person3",
        name: "Mike"
      }
    }
  }
}
By shifting the complexity to the query, it simplifies the response shape.
The advantage compared to the JSON approach is that it doesn't need any deserialization on the client.
Additional info for Venryx: a possible schema to fit my query looks like this:
type Person {
  id: String
  name: String
}
type PersonsResult {
  personKeys: [String]
  personValue(key: String): Person
}
type Query {
  persons(area: String): PersonsResult
}
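Resolvers for that schema could look like this (a sketch; fetchPersonsByArea is again a hypothetical fetcher returning the keyed object):

const resolvers = {
  Query: {
    // returns e.g. { person3: { id, name }, person5: { id, name } }
    persons: (_root, { area }) => fetchPersonsByArea(area), // hypothetical
  },
  PersonsResult: {
    personKeys: (personsObject) => Object.keys(personsObject),
    personValue: (personsObject, { key }) => personsObject[key] || null,
  },
};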
As an aside, if your data set for persons gets large enough, you're probably going to want pagination on personKeys as well, at which point you should look into https://relay.dev/graphql/connections.htm

GraphQL: Are either of these two patterns better/worse?

I'm relatively new to GraphQL, and I've noticed that you can select related fields in one of two different ways. Let's say we have a droids table and a humans table, and droids have an owner which is a record in the humans table. There's (at least) two ways you can express this:
query DroidsQuery {
  droids {
    id
    name
    owner {
      id
    }
  }
}
or:
query DroidsQuery {
  droids {
    id
    name
    ownerId # this resolves to owner.id
  }
}
At first glance the former seems more idiomatic, and obviously if you're selecting multiple fields it has advantages (owner { id name } vs. having to make a new ownerName so you can do ownerId ownerName). However, there's a certain explicitness to the ownerId style, as you're expressing "here's this thing I specifically expected you to select".
Also, from an implementation standpoint, it seems like owner { id } would lend itself to the resolver making an unnecessary JOIN, as it would translate owner { id } as the id column of the humans table (vs. an ownerId field which, with its own resolver, knows it doesn't need a JOIN to get the owner_id column of the droids table).
As I said, I'm new to GraphQL, so I'm sure there are plenty of nuances to this question that I'd appreciate more if I'd been using it longer. Therefore, I was hoping for insight from someone who has used GraphQL into the upsides and downsides of either approach. And just to be clear (and to avoid having this question closed), I'm looking for explicit "here's what is objectively bad/good about one approach over the other", not subjective "I prefer one approach" answers.
You should understand that GraphQL is just a query language plus execution semantics. There are no restrictions on how you present your data or how you resolve it.
Nothing stops you from doing what you describe and returning both an owner object and an ownerId.
type Droid {
  id: ID!
  name: String!
  owner: Human! # use it when you want to expand owner detail
  ownerId: ID! # use it when you just want the id of the owner
}
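A sketch of the resolvers (the column and table names are assumptions) shows why ownerId can be cheaper: it reads a column the droid row already carries, while owner needs a separate lookup or JOIN:

const resolvers = {
  Droid: {
    // no extra query: droids.owner_id is already on the fetched row
    ownerId: (droid) => droid.owner_id,
    // extra lookup (or JOIN) into the humans table
    owner: (droid, _args, { db }) => db.humans.findById(droid.owner_id), // hypothetical data access
  },
};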
You already pointed out the main issue yourself: the former only seems more idiomatic. You shouldn't write idiomatic code for its own sake; you should write practical code.
A real-world example is designing field pagination in GraphQL:
type Droid {
  id: ID!
  name: String!
  friends(first: Int, after: String): [Human]
}
The first time, you query a droid + friends, and it is fine.
query DroidsQuery {
  droids {
    id
    name
    friends(first: 2) {
      name
    }
  }
}
Then you click "more" to load additional friends; it hits DroidsQuery one more time, querying the previous droid object again before resolving the next page of friends:
query DroidsQuery {
  droids {
    id
    friends(first: 2, after: "dfasdf") {
      name
    }
  }
}
So it is practical to have a separate DroidFriendsQuery that resolves friends directly from a droid id, for example:
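A sketch of what that dedicated query might look like (the droid(id:) root field is an assumption about the schema):

const DROID_FRIENDS_QUERY = `
  query DroidFriendsQuery($id: ID!, $after: String) {
    droid(id: $id) {
      friends(first: 2, after: $after) {
        name
      }
    }
  }
`;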
