Say I have a table Person with attributes id and name. The GraphQL server is all set up by Postgraphile and working: I can query and create new entries. However, I cannot update them. I have been scratching my head over this again and again and still cannot find the cause.
This is the mutation I tried, and it keeps failing.
mutation($id: Int!, $patch: PersonPatch!) {
  updatePersonById(input: { id: $id, patch: $patch }) {
    clientMutationId
  }
}
The variables
{
  "id": 1,
  "patch": { "name": "newname" }
}
I was using the Altair GraphQL client to submit the mutation request, and the error message returned was "No values were updated in collection 'people' because no values were found."
The person with id = 1 does exist; I confirmed this by sending a personById query to get his name. But I just couldn't update his name.
Edit #1
Below is the generated GraphQL schema definition as shown by the Altair GraphQL Client
updatePersonById(
  input: UpdatePersonByIdInput!
): UpdatePersonPayload

input UpdatePersonByIdInput {
  clientMutationId: String
  patch: PersonPatch!
  id: Int!
}

input PersonPatch {
  id: Int
  name: String
}
Assuming you're using row-level security (RLS), it sounds like the row to be updated does not pass the required security policies for the currently authenticated user.
Here's a small example; you'll want to adjust it to fit your own permissions system.
create table person (id serial primary key, name text);
alter table person enable row level security;
grant select, insert(name), update(name), delete on person to graphql;
create policy select_all on person for select using (true);
create policy insert_all on person for insert with check(true);
create policy update_self on person for update using (id = current_person_id());
create policy delete_self on person for delete using (id = current_person_id());
where
create function current_person_id() returns int as $$
select nullif(current_setting('jwt.claims.person_id', true), '')::int;
$$ language sql stable;
If you need more guidance, feel free to drop into the Graphile chat.
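If your setup uses PostGraphile's built-in JWT support (the --jwt-secret / jwtSecret option), a common cause of this exact message is that the request arrives without a JWT, so current_setting('jwt.claims.person_id', true) is NULL and the update policy matches no rows. As a quick check, you can send the mutation with an Authorization header; a minimal sketch, where the endpoint URL and token are placeholders for your own setup:

// Quick check: send the same mutation with a JWT so the policies above can see jwt.claims.person_id.
// Run inside an async function or an ES module; the endpoint and token are placeholders.
const token = '...'; // JWT issued by your login mutation / auth provider

const res = await fetch('http://localhost:5000/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`, // PostGraphile verifies this and exposes the claims as jwt.claims.*
  },
  body: JSON.stringify({
    query: `mutation($id: Int!, $patch: PersonPatch!) {
      updatePersonById(input: { id: $id, patch: $patch }) { clientMutationId }
    }`,
    variables: { id: 1, patch: { name: 'newname' } },
  }),
});
console.log(await res.json());

The same header can also be set in Altair's request headers panel.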
Related
I have an app where auth is implemented using Cognito User Pools and the API is a GraphQL API implemented using Amplify. In the schema definitions, is there an easy way to limit the number of records a user can create? For example, in the following schema...
type Product @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  description: String
}
I would like to limit the users to a maximum of 100 Products.
One way is via my front-end. When I detect that a user has reached the 100 limit, I can just make the UI stop giving them the ability to add more. But if someone were to bypass the UI, they could create more than 100. Hence, I prefer to enforce this limit in the backend.
Is there a way to do this in the schema definition, or elsewhere in AWS / DynamoDB?
Thanks!
There isn't a straightforward way to do this that I'm aware of.
Below is how I would solve this.
Create a @key on Product on the owner property, so that you can query by owner.
Override the CreateProduct mutation. In your custom resolver, before creating a new Product, query the Product table byOwner, using the owner id passed in, to count how many already exist.
Here is the documentation: https://docs.amplify.aws/cli/graphql-transformer/resolvers#add-a-custom-geolocation-search-resolver-that-targets-an-elasticsearch-domain-created-by-searchable
I think the easiest solution would be processing the API request in a lambda function that validates the request (product count < 100) before having the script write to the DB. Then you can null out the built-in mutations for the model to prevent unintended access.
Example Schema:
type Mutation {
  addProduct(input: ProductAddInput): ProductAddOutput @function(name: "productLambda-${env}")
}

type Product
  @model(queries: null, mutations: null, subscriptions: null) # update these to what you need
  @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  description: String
}
In Lambda you can pull the username from the event.identity property and that should correlate to the owner field in the db. Since the AWS package is automatically loaded you should be looking at very fast script execution as long as your db indexes are set properly.
For the user product count, I see a couple of options (a rough sketch of such a Lambda follows this list):
- A secondary index set up on the owner field so you don't do a ton of scans.
- If you have a user table, you could add a field that counts the products for each user and just update that table any time you update the product table.
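Below is a rough sketch of such a Lambda, assuming the first option: a DynamoDB GSI on the owner field (called "byOwner" here) and an environment variable PRODUCT_TABLE pointing at the Product table. The table, index, and field names are illustrative assumptions, not something Amplify wires up for you automatically.

// Sketch of productLambda: reject the create if the owner already has 100 products.
// PRODUCT_TABLE and the "byOwner" GSI are assumptions for this example.
const AWS = require('aws-sdk');
const { randomUUID } = require('crypto');
const db = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const owner = event.identity.username;   // Cognito username of the caller
  const input = event.arguments.input;     // ProductAddInput from the mutation

  // Count how many products this owner already has, via the byOwner index.
  const existing = await db.query({
    TableName: process.env.PRODUCT_TABLE,
    IndexName: 'byOwner',
    KeyConditionExpression: '#owner = :owner',
    ExpressionAttributeNames: { '#owner': 'owner' },
    ExpressionAttributeValues: { ':owner': owner },
    Select: 'COUNT',
  }).promise();

  if (existing.Count >= 100) {
    throw new Error('Product limit of 100 reached');
  }

  const item = { id: randomUUID(), owner, ...input };
  await db.put({ TableName: process.env.PRODUCT_TABLE, Item: item }).promise();
  return item;
};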
I've reached out on the AWS forums but am hoping to get some attention here with a broader audience. I'm looking for any guidance on the following question.
I'll post the question below:
Hello, thanks in advance for any help.
I'm new to Amplify/GraphQL and am struggling to get mutations working. Specifically, when I add a connection to a Model, they never appear in the mock api generator. If I write them out, they say "input doesn't exist". I've searched around and people seem to say "Create the sub item before the main item and then update the main item" but I don't want that. I have a large form that has several many-to-many relationships and they all need to be valid before I can save the main form. I don't see how I can create every sub item and then the main.
However, the items are listed in the available data for the response. In the example below, addresses, shareholders, and boardOfDirectors are all missing from the input.
None of the fields with '@connection' appear in the create API as inputs. I'll take any help/guidance I can get. I seem to not be understanding something core here.
Here's my Model:
type Company @model(queries: { get: "getEntity", list: "listEntities" }, subscriptions: null) {
  id: ID!
  name: String!
  president: String
  vicePresident: String
  secretary: String
  treasurer: String
  shareholders: Shareholder @connection
  boardOfDirectors: BoardMember @connection
  addresses: [Address]! @connection
  ...
}

type Address @model {
  id: ID!
  line1: String!
  line2: String
  city: String!
  postalCode: String!
  state: State!
  type: AddressType!
}

type BoardMember @model {
  id: ID!
  firstName: String!
  lastName: String!
  email: String!
}

type Shareholder @model {
  id: ID!
  firstName: String!
  lastName: String!
  numberOfShares: String!
  user: User!
}
----A day later----
I have made some progress, but still lacking some understanding of what's going on.
I have updated the schema to be:
type Company @model(queries: { get: "getEntity", list: "listEntities" }, subscriptions: null) {
  id: ID!
  name: String!
  president: String
  vicePresident: String
  secretary: String
  treasurer: String
  ...
  address: Address @connection
  ...
}

type Address @model {
  id: ID!
  line1: String!
  line2: String
  city: String!
  postalCode: String!
  state: State!
  type: AddressType!
}
I removed the many-to-many relationship that I was attempting, so now I'm limited to a company having only one address. I guess that's a future problem. However, a 'companyAddressId' now appears among the inputs. This would indicate that it expects me to save the address before the company. The address is just one part of the company, and I don't want to save addresses if some other part of the form fails validation and the user quits.
I don't get why I can't write out all the fields at once? Going along with the schema above, I'll also have shareholders, boardmembers, etc. So I have to create the list of boardmembers and shareholders before I can create the company? This seems backwards.
Again, any attempt to help me figure out what I'm missing would be appreciated.
Thanks
--Edit--
What I'm seeing in explorer
-- Edit 2--
Here are the newly generated operations based on your example. You'll see that Company now takes an address id, which we discussed earlier. But it doesn't take anything about the shareholder. In order to write out a shareholder I have to use 'createShareholder', which needs a company id, but the company hasn't been created yet. Thoroughly confused.
@engam I'm hoping you can help out with the new questions. Thank you very much!
Here are some concepts that you can try out:
For the @model directive, try it out without renaming the queries. AWS Amplify gives good names to the automatically generated queries; for example, to get a company it will be getCompany and to list companies it will be listCompanys. If you still want to give them new names, you can change this later.
For the @connection directive:
The @connection needs to be set on both tables of the connection. Also, if you want many-to-many connections you need to add a third table that handles the connections. It is also useful to give the connection a name when you have many connections in your schema.
Only types that you have created in the schema, standard scalars like String, Int, Float and Boolean, and AWS-specific scalars (like AWSDateTime) can be used in the schema. Check out this link:
https://docs.aws.amazon.com/appsync/latest/devguide/scalars.html
Here is an example for some of what I think you want to achieve:
type Company @model {
  id: ID!
  name: String
  president: String
  vicePresident: String
  secretary: String
  treasurer: String
  shareholders: [Shareholder] @connection(name: "CompanySharholderConnection")
  address: Address @connection(name: "CompanyAdressConnection") # one-to-many example
  # you may add more connections/attributes ...
}

# table handling many-to-many connections between users and companies, called Shareholder
type Shareholder @model {
  id: ID!
  company: Company @connection(name: "CompanySharholderConnection")
  user: User @connection(name: "UserShareholderConnection")
  numberOfShares: Int # or String
}

type User @model {
  id: ID!
  firstname: String
  lastname: String
  company: [Shareholder] @connection(name: "UserShareholderConnection")
  # ... add more attributes / connections here
}

# address table, one address may have many companies
type Address @model {
  id: ID!
  street: String
  city: String
  code: String
  country: String
  companies: [Company] @connection(name: "CompanyAdressConnection") # many-to-one connection
}
Each of these type ... @model declarations generates a new DynamoDB table. This example makes it possible for you to create multiple companies and multiple users. To add a user as a shareholder of a company, you only need to create a new item in the Shareholder table, with the ID of the user from the User table, the ID of the company from the Company table, plus how many shares.
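For illustration, here is a rough client-side sketch of adding an existing user as a shareholder of an existing company. The api wrapper and the exact input field names (shareholderCompanyId, shareholderUserId) are assumptions, so check the generated CreateShareholderInput for the names your project actually produces.

// Hypothetical sketch: link an existing user and company through the Shareholder table.
// `api` stands for your generated mutations wrapper; the field names may differ in your project.
const addShareholder = async (companyId, userId, numberOfShares) => {
  return api.createShareholder({
    shareholderCompanyId: companyId, // id of the existing Company item (assumed field name)
    shareholderUserId: userId,       // id of the existing User item (assumed field name)
    numberOfShares,
  });
};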
Edit
Be aware that when you generate a connection between two tables, the Amplify CLI (which uses CloudFormation to make backend changes) will generate a new global secondary index on one or more of the DynamoDB tables, so that AppSync can serve your data efficiently.
Limitations in DynamoDB make it possible to add only one index (@connection) at a time when you edit a table. I think you can do more at a time when you create a new table (@model). So when you edit one or more of your tables, only remove or add one connection at a time between each amplify push / amplify publish. Otherwise CloudFormation will fail when you push the changes, and that can be a mess to clean up. I have had to delete a whole environment multiple times because of this, luckily not in a production environment.
Update
(I also updated the Address table in the schema with some values.)
To connect a new address when you are creating a new company, you will first have to create a new address item in the Address table in DynamoDB.
The mutation generated by AppSync for this is probably named createAddress() and takes a CreateAddressInput.
After you create the address you will receive back the whole newly created item, including the automatically created id (if you did not supply one yourself).
Now you may save the new company that you are creating. One of the attributes the createCompany mutation takes is the id of the address that you created, probably named companyAddressId. Store the address id here. When you then retrieve your company with either getCompany or listCompanys you will get the address of your company.
Javascript example:
const createCompany = async (address, company) => {
  // api is the name of the service with the mutations and queries
  try {
    const newaddress = await this.api.createAddress({ street: address.street, city: address.city, country: address.country });
    const newcompany = await this.api.createCompany({
      name: company.name,
      president: company.president,
      ...
      companyAddressId: newaddress.id
    })
  } catch (error) {
    throw error
  }
}
// and to retrieve the company including the address, you have to update your graphql statement for your query:
const statement = `query ListCompanys($filter: ModelCompanyFilterInput, $limit: Int, $nextToken: String) {
  listCompanys(filter: $filter, limit: $limit, nextToken: $nextToken) {
    items {
      __typename
      id
      name
      president
      ...
      address {
        __typename
        id
        street
        city
        code
        country
      }
    }
    nextToken
  }
}
`
AppSync will now retrieve all your companies (depending on your filter and limit) and the addresses of those companies you have connected an address to.
Edit 2
Each type with @model is a reference to a DynamoDB table in AWS. So when you are creating a one-to-many relationship between two tables and both items are new, you first have to create the item that is referenced, which in this example is the address. In the DynamoDB Company table, since an address can have many companies and one company can only have one address, you have to store the id (the DynamoDB primary key) of the address on the company. You could of course generate the address id on the frontend, use that as the id of the address and as the companyAddressId of the company, and use await Promise.all([createAddress(...), createCompany(...)]), but then if one fails the other one will still be created (though generally AppSync APIs are very stable, so if the data you send is correct it won't fail).
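A rough sketch of that parallel approach, generating the shared address id on the client; the api wrapper is the same assumption as in the earlier example, and crypto.randomUUID() stands in for whatever id generator you prefer:

// Sketch: create the address and the company in parallel by generating the address id client-side.
// If one call fails the other may still succeed, as noted above.
const createCompanyWithAddress = async (address, company) => {
  const addressId = crypto.randomUUID(); // or any other unique id generator
  const [newAddress, newCompany] = await Promise.all([
    api.createAddress({ id: addressId, street: address.street, city: address.city, country: address.country }),
    api.createCompany({ name: company.name, president: company.president, companyAddressId: addressId }),
  ]);
  return newCompany;
};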
Another solution, if you generally don't want to have to create/update multiple items in multiple tables, is to store the address directly in the company item.
type Company @model {
  name: String
  ...
  address: Address # or [Address] if you want more than one Address on the company
}

type Address {
  street: String
  postcode: String
  city: String
}
Then the Address type will be part of the same item in the same table in DynamoDB. But you will lose the ability to run queries on addresses (or shareholders) to look up an address and see which companies are located there (or, similarly, look up a person and see which companies that person has a share in). Generally I don't like this method because it locks your application to one specific thing, and it's harder to create new features later on.
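With the embedded (non-@model) Address type above, creating a company becomes a single call. A hypothetical sketch, assuming the same api wrapper and that the transformer generates a matching address input on CreateCompanyInput:

// Sketch: with Address embedded in Company, one createCompany call stores everything on the same item.
const createCompanyWithEmbeddedAddress = async (company, address) => {
  return api.createCompany({
    name: company.name,
    address: { street: address.street, postcode: address.postcode, city: address.city },
  });
};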
As far as I'm aware, it is not possible to create multiple items in multiple DynamoDB tables in one GraphQL (Amplify/AppSync) mutation. So async/await with Promise.all(), where you manually generate the id attributes on the frontend before creating the items, might be your best option.
I'm experimenting with AppSync + DynamoDB. I want to have the following types in my GraphQL Schema:
type User {
  user_id: String!
}

type Box {
  name: String!
  user: User!
}
How can I create, in DynamoDB, a table storing items pointing to another table? (In my case, I want the field user of the table BoxTable to be a reference to a user in the table UserTable.)
How can I, in AppSync, define the above schema? When I set user: User!, I get the error "Expected User! to be a GraphQL input type."
As per my understanding of your question, these are my answers.
How can I create, in DynamoDB, a table storing items pointing to another table
DynamoDB is not a relational database and does not offer foreign keys or table joins. Therefore, to achieve what you have mentioned in your post, you would still require two calls to DynamoDB to get all the information for the Box, i.e. first get the Box item from BoxTable and then get the user from UserTable based on user_id. If your use case is such that you get the user first, then you can get the Box using a filter by user_id.
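For illustration, a minimal sketch of those two calls in Node.js with the AWS SDK v2 DocumentClient, assuming BoxTable is keyed by name, UserTable is keyed by user_id, and the Box item stores the referenced user_id (all assumptions about your table design):

// Sketch: fetch a Box, then fetch the User it points to, in two separate DynamoDB calls.
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

const getBoxWithUser = async (boxName) => {
  // 1st call: the Box item only stores the user's id, not the user itself.
  const { Item: box } = await db.get({ TableName: 'BoxTable', Key: { name: boxName } }).promise();

  // 2nd call: resolve the referenced user from UserTable.
  const { Item: user } = await db.get({ TableName: 'UserTable', Key: { user_id: box.user_id } }).promise();

  return { ...box, user };
};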
Now to the second part of your post,
How can I, in AppSync, define the above schema?
With DynamoDB unit resolvers, you can query against a single table (outside of DynamoDB Batch Operations but those are reserved for bulk use cases).
One way of doing this is by defining your schema so that it looks something like this:
type User {
  user_id: String!
}

type Box {
  name: String!
  user: User!
}

input BoxInput {
  name: String!
  user: UserInput!
}

input UserInput {
  user_id: String!
}

type Mutation {
  createBox(input: BoxInput): Box
}

type Query {
  getBox(input: BoxInput): Box
}
And this is how you can run the query and mutation:
mutation createBox {
  createBox(input: {
    name: "abc"
    user: { user_id: "1234-abcd-5678-efgh" }
  }) {
    name
    user { user_id }
  }
}

query getBox {
  getBox(input: {
    name: "abc"
    user: { user_id: "1234-abcd-5678-efgh" }
  }) {
    name
    user { user_id }
  }
}
Beware of the above query and mutation, though: they will show user as null unless you attach a separate resolver to the user field within your Box type. For example:
Query that returns Box --> Resolver
type Box {
name
user --> Attach resolver to get user_id from your UserTable
}
Another way is to utilize pipeline resolvers, in which you can create multiple functions, each of which can use the results of the previous function and query a data source. These functions run in the order you specify. For example:
Function to get the Box from BoxTable.
Function to get the user from UserTable by using the user_id from ctx.prev.result.
And finally, consolidate the above two results into one JSON object matching the Box type in your schema.
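As an illustration only, the second function could look like this if you use AppSync's JavaScript (APPSYNC_JS) resolver runtime rather than VTL; the DynamoDB data source for UserTable and the user_id key name are assumptions:

// Sketch of pipeline function 2: look up the user referenced by the Box fetched in function 1.
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  // ctx.prev.result is the Box item returned by the previous function.
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ user_id: ctx.prev.result.user_id }),
  };
}

export function response(ctx) {
  // Merge the user into the Box so the result matches the Box type in the schema.
  return { ...ctx.prev.result, user: ctx.result };
}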
My data model includes the following nodes:
model User {
  id    Int    @id @default(autoincrement())
  name  String
  posts Post[]
}

model Post {
  id     Int    @id @default(autoincrement())
  body   String
  user   User   @relation(fields: [userId], references: [id])
  userId Int
}
I tried to delete one User like this:
async function deleteUser(_, args) {
  const { id } = args
  return prisma.user.delete({
    where: { id: id }
  })
}
But it gives an error: ... The change you are trying to make would violate the required relation UserToPost between the User and Post models.
Then how do I delete a user? I even tried to delete the posts first and then the user, but the same error happened again.
This has now been released as a preview feature behind a feature flag. You can read about it in the release notes for 2.26.0: https://github.com/prisma/prisma/releases/tag/2.26.0
The preview feature can be enabled by setting the preview feature flag referentialActions in the generator block of Prisma Client in your Prisma schema file:
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["referentialActions"]
}
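Until you adopt referential actions, a common workaround is to delete the dependent posts and then the user inside a single transaction, so the required relation is never violated. A sketch using the Prisma Client from the question:

// Sketch: remove the user's posts and then the user atomically with $transaction.
async function deleteUser(_, args) {
  const { id } = args
  const [, user] = await prisma.$transaction([
    prisma.post.deleteMany({ where: { userId: id } }), // delete dependents first
    prisma.user.delete({ where: { id } }),
  ])
  return user
}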
It looks like your table does not support CASCADE deletions and Prisma does not automatically add it for you. You will have to manually update the definition of your table, either during migration or after the fact.
So basically, alter your table definition:
ALTER TABLE public."Post"
DROP CONSTRAINT "Post_userId_fkey",
ADD CONSTRAINT "Post_userId_fkey"
FOREIGN KEY ("userId")
REFERENCES public."User"(id)
ON DELETE CASCADE
ON UPDATE CASCADE;
Refer to these docs on options of configuring relational queries.
Imagine that I have a query called "users" that returns all the users, and these users can be associated with one or more companies, so I have a type UserCompanies (I need it because it saves some more information beyond the relation). I'm using Prisma and I need to force a filter that returns only users that are in the same company as the requester.
I get the information of the company from JWT and need to inject this to the query before sending it to Prisma.
So, the query should be like this:
query allUsers {
  users {
    name
    id
    status
    email
    userCompanies {
      id
      role
    }
  }
}
and on the server side, I should transform it into the following (the where on users is fine, that's just changing args):
query allUsers {
  users(where: {
    userCompanies_some: {
      companyId: "companyId-from-jwt"
    }
  }) {
    name
    id
    status
    email
    userCompanies(where: {
      companyId: "companyId-from-jwt"
    }) {
      id
      role
    }
  }
}
I see a few possible ways to solve this, but I don't know which is best:
1 - Using addFragmentToInfo does the job of adding conditions to the query, but if the query already selects userCompanies it gives me a conflict. Otherwise, it works fine.
2 - I can use an alias for the query, but after the DB result comes back I will need to edit every item in the result array to overwrite it.
3 - Don't pass info to Prisma and filter in JS.
4 - Edit info (the 4th resolver parameter), of type GraphQLResolveInfo.
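For the outer users filter, which the question notes is just a matter of changing args, here is a rough sketch of merging the company id from the JWT into the arguments before delegating to Prisma. The ctx.prisma.query.users call follows the prisma-binding style and ctx.jwt.companyId is a placeholder for however you decode the token; this does not by itself solve the nested userCompanies filter, which is what options 1 to 4 are about.

// Sketch: force the company filter on the top-level users query.
const resolvers = {
  Query: {
    users: (parent, args, ctx, info) => {
      const where = {
        ...args.where,
        userCompanies_some: { companyId: ctx.jwt.companyId }, // injected from the verified JWT
      };
      return ctx.prisma.query.users({ ...args, where }, info);
    },
  },
};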