Consider the following two GraphQL object types.
type Student {
id: ID!
name: String
city: String
pin: String
phoneModel: String
}
Assume that we have a DataLoader registered on phoneModel, because phoneModel information is fetched from a different data source. So, to avoid the N + 1 issue, for each student record (id) we batch the phoneModel requests and finally send out one single query to get the phone models.
type Employee {
id: ID!
name: String
city: String
pin: String
phoneModel: String
}
Assume that the phoneModel database here is completely different from the Student phone database above,
and that phoneModel information is fetched from a different remote database.
I want to register a DataLoader on phoneModel here too.
Problem:
dataloader.register(String Key, DataLoader BatchLoader);
For the Student object, it is registered as:
dataloader.register("phoneModel", StudentPhoneBatchLoader);
For the Employee object, how do I register a DataLoader without overwriting the Student's DataLoader?
dataloader.register("phoneModel", EmployeePhoneBatchLoader);
I guess the workaround is to register the DataLoaders under unique keys.
dataloader.register("StudentPhoneModel", StudentPhoneBatchLoader);
dataloader.register("EmployeePhoneModel", EmployeePhoneBatchLoader);
And when resolving the field, do:
dataloader.getDataLoader("StudentPhoneModel") and
dataloader.getDataLoader("EmployeePhoneModel")
Related
Consider this model:
type Address {
id: ID!
}
type Person {
id: ID!
address: Address
}
I'm writing the resolver for Person->address to get the address of the person, which is in another DB table.
In order to do that, I need the addressId but I don't have it. The addressId property was available inside the Person resolver, but that was lost, since my GraphQL schema doesn't have an addressId.
How do I get the addressId inside the Person.address resolver?
PS:
One way I can do it is to add an addressId to Person, but if I do that, I would have to add sibling ID fields for all foreign keys in my GraphQL schema, and this seems off. If that's the way to go, though, I'd do it.
The way I ended up solving this problem was to pair any child graph node with an ID property.
Example:
type Person {
addressId: String
address: Address
occupationId: String
occupation: Occupation
}
This way, my resolver function for Person would always resolve addressId and occupationId but the child resolvers would kick in for address and occupation. Inside these resolvers, parent would be Person and I would be able to read the addressId and occupationId.
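A minimal resolver sketch of that pattern (the lookup functions getPersonById, getAddressById, and getOccupationById are placeholders for whatever data access you use):

const resolvers = {
  Query: {
    // The Person row already carries the foreign keys, so resolve them alongside the rest.
    person: (root, { id }) => getPersonById(id), // e.g. { id, addressId, occupationId, ... }
  },
  Person: {
    // parent is the object returned above, so the id fields added to the schema are available here.
    address: (parent) => getAddressById(parent.addressId),
    occupation: (parent) => getOccupationById(parent.occupationId),
  },
};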
I currently have a schema like:
type RawData
#model
{
id: ID!
controller: Controller #connection(name: "RawDataController")
createdAt: String!
notes: String
}
type Controller
#model
{
id: ID!
rawData: [RawData] #connection(name: "RawDataController")
notes: String
}
This works, but I now have so much RawData for each Controller that I end up pulling too much data, which causes pagination issues. What I want to do is get all of a Controller's RawData for today, so I want to change the #connection into a GSI on the RawSensor ID and createdAt.
I tried adding a #key
#key (name: "ByControllerCollectedAt", fields: ["controller", "collectedAt"], queryField: "rawDataByCollectedAt")
but I get an error:
Expected scalar and got Controller
An error occurred during the push operation: Expected scalar and got Controller.
So then I tried to create a key on the rawDataControllerId field, which is created in DynamoDB underneath AppSync's abstraction and currently contains the relevant controller id, but I get:
You cannot specify a nonexistent field 'rawDataControllerId' in #key 'ByControllerCollectedAt' on type 'RawData'.
I'm completely stuck, can anybody tell me how I can turn my current connection into one with a sort key?
I've reached out on the AWS forums but am hoping to get some attention here with a broader audience. I'm looking for any guidance on the following question.
I'll post the question below:
Hello, thanks in advance for any help.
I'm new to Amplify/GraphQL and am struggling to get mutations working. Specifically, when I add a connection to a Model, they never appear in the mock api generator. If I write them out, they say "input doesn't exist". I've searched around and people seem to say "Create the sub item before the main item and then update the main item" but I don't want that. I have a large form that has several many-to-many relationships and they all need to be valid before I can save the main form. I don't see how I can create every sub item and then the main.
However, the items are listed in the available data for the response. In the example below, addresses, shareholders, and boardOfDirectors are all missing from the input.
None of the fields with '#connection' appear in the create api as inputs. I'll take any help/guidance I can get. I seem to not be understanding something core here.
Here's my Model:
type Company #model(queries: { get: "getEntity", list: "listEntities" }, subscriptions: null) {
id: ID!
name: String!
president: String
vicePresident: String
secretary: String
treasurer: String
shareholders: Shareholder #connection
boardOfDirectors: BoardMember #connection
addresses: [Address]! #connection
...
}
type Address #model{
id: ID!
line1: String!
line2: String
city: String!
postalCode: String!
state: State!
type: AddressType!
}
type BoardMember #model{
id: ID!
firstName: String!
lastName: String!
email: String!
}
type Shareholder #model {
id: ID!
firstName: String!
lastName: String!
numberOfShares: String!
user: User!
}
----A day later----
I have made some progress, but I'm still lacking some understanding of what's going on.
I have updated the schema to be:
type Company #model(queries: { get: "getEntity", list: "listEntities" }, subscriptions: null) {
id: ID!
name: String!
president: String
vicePresident: String
secretary: String
treasurer: String
...
address: Address #connection
...
}
type Address #model{
id: ID!
line1: String!
line2: String
city: String!
postalCode: String!
state: State!
type: AddressType!
}
I removed the many-to-many relationship that I was attempting, and now I'm limited to a company only having one address. I guess that's a future problem. However, a 'CompanyAddressId' now appears in the list of inputs. This would indicate that it expects me to save the address before the company. The address is just one part of the company, and I don't want to save an address if some other part of the form fails validation and the user quits.
I don't get why I can't write out all the fields at once. Going along with the schema above, I'll also have shareholders, board members, etc. So I have to create the list of board members and shareholders before I can create the company? This seems backwards.
Again, any attempt to help me figure out what I'm missing would be appreciated.
Thanks
--Edit--
What I'm seeing in explorer
-- Edit 2--
Here are the newly generated operations based on your example. You'll see that Company takes an address id now, which we discussed before. But it doesn't take anything about the shareholder. In order to write out a shareholder I have to use 'createShareholder', which needs a company id, but the company hasn't been created yet. Thoroughly confused.
@engam I'm hoping you can help with the new questions. Thank you very much!
Here are some concepts that you can try out:
For the #model directive, try it out without renaming the queries. AWS Amplify gives good names to the automatically generated queries; for example, to get a company it will be getCompany, and for the list it will be listCompanys. If you still want to give them new names, you can change this later.
For the #connection directive:
The #connection needs to be set on both tables of the connection. Also, if you want many-to-many connections, you need to add a third table that handles the connections. It is also useful to give the connection a name when you have many connections in your schema.
Only scalar types that you have created in the schema, standard scalars like String, Int, Float, and Boolean, and AWS-specific scalars (like AWSDateTime) can be used as scalars in the schema. Check out this link:
https://docs.aws.amazon.com/appsync/latest/devguide/scalars.html
Here is an example for some of what I think you want to achieve:
type Company #model {
id: ID!
name: String
president: String
vicePresident: String
secretary: String
treasurer: String
shareholders: [Shareholder] #connection(name: "CompanySharholderConnection")
address: Address #connection(name: "CompanyAdressConnection") #one to many example
# you may add more connections/attributes ...
}
# table handling many-to-many connections between users and companies, called Shareholder.
type Shareholder #model {
id: ID!
company: Company #connection(name: "CompanySharholderConnection")
user: User #connection(name: "UserShareholderConnection")
numberOfShares: Int #or String
}
type User #model {
id: ID!
firstname: String
lastname: String
company: [Shareholder] #connection(name: "UserShareholderConnection")
#... add more attributes / connections here
}
# address table, one address may have many companies
type Address #model {
id: ID!
street: String
city: String
code: String
country: String
companies: [Company] #connection(name: "CompanyAdressConnection") #many-to-one connection
}
Each of these type ... #model declarations generates a new DynamoDB table. This example will make it possible for you to create multiple companies and multiple users. To add a user as a shareholder of a company, you only need to create a new item in the Shareholder table with the ID of the user from the User table and the ID of the company from the Company table, plus how many shares they hold.
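For illustration, a rough JavaScript sketch of adding a shareholder this way; the input field names shareholderCompanyId and shareholderUserId are the ones Amplify typically generates for the two #connection fields, but verify them against your generated ./graphql/mutations file:

import { API, graphqlOperation } from 'aws-amplify';
import { createShareholder } from './graphql/mutations'; // generated by amplify codegen

// Link an existing user to an existing company with a share count.
const addShareholder = async (companyId, userId, shares) => {
  const result = await API.graphql(graphqlOperation(createShareholder, {
    input: {
      shareholderCompanyId: companyId, // id of the item in the Company table
      shareholderUserId: userId,       // id of the item in the User table
      numberOfShares: shares,
    },
  }));
  return result.data.createShareholder;
};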
Edit
Be aware that when you generate a connection between two tables, the Amplify CLI (which uses CloudFormation to do backend changes) will generate a new global secondary index on one or more of the DynamoDB tables, so that AppSync can serve your data efficiently.
Limitations in DynamoDB make it only possible to generate one index (#connection) at a time when you edit a table. I think you can do more at a time when you create a new table (#model). So when you edit one or more of your tables, only remove or add one connection at a time between each amplify push / amplify publish. Otherwise CloudFormation will fail when you push the changes, and that can be a mess to clean up. I have had to delete a whole environment multiple times because of this, luckily never in a production environment.
Update
(I also updated the Address table in the schema with some values.)
To connect a new address when you are creating a new company, you will first have to create a new address item in the Address table in DynamoDB.
The mutation generated by AppSync for this is probably named createAddress() and takes a CreateAddressInput.
After you create the address you will receive back the whole newly created item, including the automatically created ID (if you did not add one yourself).
Now you may save the new company that you are creating. One of the attributes the createCompany mutation takes is the id of the address that you created, probably named companyAddressId. Store the address id there. When you then retrieve your company with either getCompany or listCompanys, you will get the address of your company.
JavaScript example:
const createCompany = async (address, company) => {
// api is name of the service with the mutations and queries
try {
const newaddress = await this.api.createAddress({street: address.street, city: address.city, country: address.country});
const newcompany = await this.api.createCompany({
name: company.name,
president: company.president,
...
companyAddressId: newaddress.id
})
} catch(error) {
throw error
}
}
// and to retrieve the company including the address, you have to update your graphql statement for your query:
const statement = `query ListCompanys($filter: ModelCompanyFilterInput, $limit: Int, $nextToken: String) {
listCompanys(filter: $filter, limit: $limit, nextToken: $nextToken) {
__typename
id
name
president
...
address {
__typename
id
street
city
code
country
}
}
}
`
AppSync will now retrieve all your companies (depending on your filter and limit) and the addresses of those companies you have connected an address to.
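If you are calling AppSync through the aws-amplify library rather than a hand-rolled service wrapper, running that statement might look roughly like this (the limit value is just an example):

import { API, graphqlOperation } from 'aws-amplify';

const listCompaniesWithAddresses = async () => {
  const result = await API.graphql(graphqlOperation(statement, { limit: 20 }));
  // List queries return { items, nextToken }; each item includes its connected address, if any.
  return result.data.listCompanys.items;
};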
Edit 2
Each type with #model is a reference to a DynamoDB table in AWS. So when you are creating a one-to-many relationship between two tables and both items are new, you first have to create the item whose id will be referenced. In the DynamoDB Company table, when an address can have many companies and one company can only have one address, you have to store the id (the DynamoDB primary key) of the address on the company. You could of course generate the address id on the frontend, use it both as the id of the address and as the companyAddressId of the company, and use await Promise.all([createAddress(...), createCompany(...)]), but then if one fails the other one will still be created (though AppSync APIs are generally very stable, so if the data you send is correct they won't fail).
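A rough sketch of that parallel variant, assuming ids generated on the client with the uuid package and the same api service wrapper as in the example above (passed in here as a parameter):

import { v4 as uuid } from 'uuid';

// Create the address and the company in parallel by deciding the address id up front.
// As noted above, if one request fails the other item may still be created.
const createCompanyWithAddress = async (api, address, company) => {
  const addressId = uuid();
  const [newAddress, newCompany] = await Promise.all([
    api.createAddress({ id: addressId, ...address }),
    api.createCompany({ ...company, companyAddressId: addressId }),
  ]);
  return newCompany;
};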
Another solution, if you generally don't want to have to create/update multiple items in multiple tables, is to store the address directly in the company item.
type Company #model {
name: String
...
address: Address # or [Address] if you want more than one Address on the company
}
type Address {
street: String
postcode: String
city: String
}
Then the Address type will be part of the same item in the same table in DynamoDB. But you will lose the ability to run queries on addresses (or shareholders), e.g. to look up an address and see which companies are located there (or similarly, look up a person and see which companies that person has a share in). Generally I don't like this method because it locks your application to one specific thing, and it's harder to create new features later on.
As far as I'm aware, it is not possible to create multiple items in multiple DynamoDB tables in one GraphQL (Amplify/AppSync) mutation. So async/await with Promise.all(), where you manually generate the id attributes on the frontend before creating the items, might be your best option.
Normally I wouldn't try to create a relationship between primary keys within my Amplify schema, but I am trying to recreate a friend's code so that I can regularly deploy it with Amplify, hence I don't really have an option in this case.
My question is how to create a link between these two primary keys, and whether there is a way to do that; I have already followed the documentation here as well.
Ideally I would like to have my schema.graphql file look like this:
type ShoppingList #model #key(fields: ["UPC"]) {
UPC: Products #connection
quantity: Int
timestamp: Int
}
type Products #model #key(fields: ["UPC"]) {
UPC: String!
Description: String
Name: String
Price: Float
ProductId: String
combinedSeaarchKey: String
Img_URL: String
LongDescription: String
UserForRecommendations: Boolean
Hidden: Boolean
TrainingImageWidthInches: Float
}
When trying to deploy this, I get the error "Expected scalar and got Products".
Ideally I want to keep the schema the same as well, since I don't want to go rewriting my friend's client-side application, and would rather try and fix it in the schema!
Hence any help would be greatly appreciated! Thanks
I was looking for a solution to the same general issue, came across your post, and then solved my own problem. My issue was a little different: I was trying to sort on a non-scalar field. In your case, you're receiving that error because you're trying to make a key out of a non-scalar entity. Remove that #key from ShoppingList and you should clear your error, but let's talk through what I believe you're trying to achieve.
I assume you're trying to make a 1:Many relationship between ShoppingList and Products.
In your ShoppingList code, you have Products as a single entity but likely meant to have an array of Products:
UPC: [Products]
From there you need to define your connection between UPC and Products. You correctly called out the use of #connection, but didn't create the connection. To create the connection in a 1:Many relationship, you're going to want 1 ShoppingList and many Products. To achieve this, you likely want the following:
type ShoppingList #model {
id: ID! #make sure you're specifying a PK for each object
UPC: [Products] #connection(keyName: "relatedShoppingList", fields: ["id"])
quantity: Int
timestamp: Int
}
type Products #model #key(name: "relatedShoppingList", fields: ["shoppingListID"]) {
id: ID!
shoppingListID: ID! #foreign key that the "relatedShoppingList" key indexes
parentShoppingList: ShoppingList #connection(fields: ["shoppingListID"])
UPC: String!
Description: String
Name: String
Price: Float
ProductId: String
combinedSearchKey: String
Img_URL: String
LongDescription: String
UserForRecommendations: Boolean
Hidden: Boolean
TrainingImageWidthInches: Float
}
I foresee some additional issues with your data setup, but this should unblock your 1:many relationship between products and shopping lists.
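Assuming the corrected schema above, wiring a product to a shopping list might look roughly like this on the client (the mutation and field names are the ones Amplify typically generates; double-check them against your project's generated code):

import { API, graphqlOperation } from 'aws-amplify';
import { createShoppingList, createProducts } from './graphql/mutations';

// Create a list, then create a product that points back to it via the key field.
const addProductToNewList = async () => {
  const listResult = await API.graphql(graphqlOperation(createShoppingList, {
    input: { quantity: 1, timestamp: Math.floor(Date.now() / 1000) },
  }));
  const listId = listResult.data.createShoppingList.id;

  await API.graphql(graphqlOperation(createProducts, {
    input: { shoppingListID: listId, UPC: '012345678905', Name: 'Example product' },
  }));
};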
This feels basic, so I would expect to find this scenario mentioned, but I have searched and can't find an example that matches my scenario. I have two endpoints (I am using HTTP data sources) that I'm trying to combine.
Class:
{
id: string,
students: [
<studentID1>,
<studentID2>,
...
]
}
and Student:
{
id: String,
lastName: String
}
What I would like is a schema that looks like this:
Student: {
id: ID!
lastName: String
}
Class: {
id: ID!,
studentDetails: [Student]
}
From reading, I know that I need some sort of resolver on Class.studentDetails that will return an array/List of student objects. Most of the examples I have seen show retrieving the list of Students based on class ID (ctx.source.id), but that won't work in this case. I need to call the students endpoint 1 time per student, passing in the student ID (I cannot fetch the list of students by class ID).
Is there a way to write a resolver for Class/studentDetails that loops through the student IDs in Class and calls my students endpoint for each one?
I was thinking something like this in the Request Mapping Template:
#set($studentDetails = [])
#foreach($student in $ctx.source.students)
$util.qr($studentDetails.add(...invoke web service to get student details...))
#end
$studentDetails
Edit: After reading Lisa Shon's comment below, I realized that there is a batch resolver for DynamoDB data sources that does this, but I don't see a way to do that for HTTP data sources.
It's not ideal, but you can create an intermediate type.
type Student {
id: ID!
lastName: String
}
type Class {
id: ID!,
studentDetails: [StudentDetails]
}
type StudentDetails {
student: Student
}
In your resolver template for Class, create a list of those student ids:
#set($studentDetails = [])
#foreach ($student in $class.students)
$util.qr($studentDetails.add({"id": "$student.id"}))
#end
and add it to your response object. Then hook a resolver to the student field of StudentDetails, and you will be able to use $context.source.id for the individual student API call. Each id will be broken out of the array and become its own web request.
I opened a case with AWS Support and was told that the only way they know to do this is to create a Lambda Resolver that:
Takes an array of student IDs
Calls the students endpoint for each one
Returns an array of student details information
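A minimal Node.js sketch of such a Lambda, with axios standing in for whatever HTTP client you use and a placeholder endpoint URL; the exact event shape depends on how you wire the Lambda data source and its request mapping template:

const axios = require('axios');

exports.handler = async (event) => {
  // Assume the resolver passes the parent class's student id array through to the Lambda.
  const ids = event.source.students || [];

  // One call to the students endpoint per id, issued in parallel.
  const responses = await Promise.all(
    ids.map(id => axios.get(`https://students.example.com/students/${id}`))
  );

  // The returned array becomes the [Student] list for the studentDetails field.
  return responses.map(res => res.data);
};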
Instead of calling your student endpoint in the response, use a pipeline resolver and stitch the responses from the different steps together using the stash and context ($ctx.prev.result / $ctx.result), etc.