AWS AppSync query erroring out while using a resolver - graphql

I'm new to AWS AppSync; however, it's been pretty easy to learn and understand.
I'm trying to create a resolver so that when the user runs getChore(id: "") it returns all of the chore's information, which it does successfully. The problem is that within the chore there are two fields, createdBy and assignedTo, which are linked to a User type.
type Chore {
  id: ID!
  title: String
  desc: String
  status: String
  reward: Float
  retryDeduction: Float
  required: Boolean
  createdDate: AWSDateTime
  date: AWSDateTime
  interval: String
  assignedTo: User
  createdBy: User
}
type User {
  id: ID!
  age: Int
  f_name: String
  l_name: String
  type: Int
  admin: Boolean
  family: Family
}
Within AWS AppSync I'm trying to attach a resolver to assignedTo: User and createdBy: User so my query will look like:
query getChore {
  getChore(id: "36d597c8-2c7e-4f63-93ee-38e5aa8f1d5b") {
    id
    ...
    ...
    assignedTo {
      id
      f_name
      l_name
    }
    createdBy {
      id
      f_name
      l_name
    }
  }
}
However, when I fire off this query I'm getting an error:
The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException;
I have researched this and can't seem to find the correct solution.
The resolver I'm using is:
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id),
  }
}
and the response mapping template is:
$util.toJson($ctx.result)

When you get the "The provided key element does not match the schema" error, it's because your request mapping template key doesn't match the primary key in DynamoDB. You can enable CloudWatch Logs in your API's settings to see exactly what was sent to DynamoDB.
I'm not able to tell what's wrong with your template because your sample lacks some information. Could you answer the following questions about your application:
- Where are the users stored? Are they stored in their own DDB table separate from the chores, and is the hash key on the users table id as well?
- In the chores table how do you know which user your chore is assignedTo or createdBy? Is there a user id stored on the chore DDB item?
- Is the request mapping template you posted the one attached to the Chore.assignedTo resolver? If yes, using $ctx.args.id will actually do a GetItem based on the chore id, not the user it's assigned to.
Finally, I reproduced your application and I was able to make it work with a few changes.
Prerequisites:
I have a chores and a users DynamoDB table, both with id as the hash key. These two tables are mapped as data sources in AppSync.
I have one chore in the chores table that looks like:
{
  "assignedTo": "1",
  "createdBy": "2",
  "id": "36d597c8-2c7e-4f63-93ee-38e5aa8f1d5b",
  "title": "Chore1"
}
and two users in the users table:
{
  "f_name": "Alice",
  "id": "2",
  "l_name": "Wonderland"
}
and
{
  "f_name": "John",
  "id": "1",
  "l_name": "McCain"
}
I used your GraphQL schema
Resolvers
Resolver on Query.getChore pointing to the chores table:
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id),
  }
}
Resolver on Chore.assignedTo pointing to the users table (note the $ctx.source.assignedTo instead of $ctx.args)
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.source.assignedTo),
  }
}
Similarly, resolver on Chore.createdBy pointing to the users table:
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.source.createdBy),
  }
}
All resolvers' response mapping templates use the pass-through template, $util.toJson($ctx.result).
Running the query
Finally, when running your query:
query getChore {
  getChore(id: "36d597c8-2c7e-4f63-93ee-38e5aa8f1d5b") {
    id
    assignedTo {
      id
      f_name
      l_name
    }
    createdBy {
      id
      f_name
      l_name
    }
  }
}
I get the following results:
{
  "data": {
    "getChore": {
      "id": "36d597c8-2c7e-4f63-93ee-38e5aa8f1d5b",
      "assignedTo": {
        "id": "1",
        "f_name": "John",
        "l_name": "McCain"
      },
      "createdBy": {
        "id": "2",
        "f_name": "Alice",
        "l_name": "Wonderland"
      }
    }
  }
}
Hope it helps!

Related

GraphQL AppSync DynamoDB only update one field in mutation

So I am working on an app with AWS Amplify. I am using a single-table design and I am trying to run a mutation where I only update the profile field of the UserV0 in the single table. I am trying to only update the profile (an S3 key), but when I run my mutation it deletes the rest of the contents of UserV0.
GraphQL Schema
type SingleTable @model {
  pk: String! @primaryKey(sortKeyFields: ["sk"])
  sk: String!
  user: UserV0
  post: PostV0
}
type UserV0 {
  name: String
  username: String
  email: String
  profile: String
}
type PostV0 {
  ...
}
query getUserInfo {
  getSingleTable(pk: "TEST", sk: "TEST") {
    user {
      username
      name
      profile
      email
    }
  }
}
mutation createTable {
  createSingleTable(input: {pk: "TEST", sk: "TEST", user: {email: "email@email.com", name: "testname", profile: "testPath", username: "testusername"}}) {
    updatedAt
  }
}
mutation updateTable {
  updateSingleTable(input: {pk: "TEST", sk: "TEST", user: {profile: "TESTING", username: "TESTING123"}}) {
    createdAt
  }
}
If I run the update mutation above, the entire user is reset, and when I check it in DynamoDB the name and email fields are lost. How can I make it so that when I run the mutation, only the profile field is updated and the other fields are left alone, without deleting them? Thanks in advance.
Edit: I put in all of the queries and mutations that I am running in AppSync. I run createTable and then getUserInfo and it returns this as it should.
{
  "data": {
    "getSingleTable": {
      "user": {
        "username": "testusername",
        "name": "testname",
        "profile": "testPath",
        "email": "email@email.com"
      }
    }
  }
}
But after I run the updateTable and then getUserInfo it returns this.
{
  "data": {
    "getSingleTable": {
      "user": {
        "username": "TESTING123",
        "name": null,
        "profile": "TESTING",
        "email": null
      }
    }
  }
}
As you can see, the name and email fields are reset, set to null, and removed from the DynamoDB table. I am pretty sure it is because it sees the user object as a new input. But how do I get it to recognize that I only want to update certain fields in UserV0 and not the entire thing?
Make sure your function updateSingleTable uses the UpdateItem operation:
"operation" : "UpdateItem"
If you're using PutItem, which I assume you are, it performs an overwrite and thus removes existing data.
https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html
Under the hood I believe the DynamoDB client is DynamoDBMapper. As a result, it will delete values that are set to null. To avoid this, make sure you do not set values to null; instead, omit any values not being used in the request.
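For illustration, here is a minimal hand-written sketch of an UpdateItem request mapping template for this case. It is not the resolver Amplify generates for @model types; it simply assumes the pk/sk keys and the nested user.profile attribute from the question, and shows that an update expression only writes the attributes it names:
#set( $input = $ctx.args.input )
{
  "version": "2017-02-28",
  "operation": "UpdateItem",
  "key": {
    "pk": $util.dynamodb.toDynamoDBJson($input.pk),
    "sk": $util.dynamodb.toDynamoDBJson($input.sk)
  },
  "update": {
    ## only user.profile is written; name, email and username are left untouched
    "expression": "SET #user.#profile = :profile",
    "expressionNames": {
      "#user": "user",
      "#profile": "profile"
    },
    "expressionValues": {
      ":profile": $util.dynamodb.toDynamoDBJson($input.user.profile)
    }
  }
}
By contrast, a PutItem with the same input replaces the whole item, which is exactly the behaviour described in the question.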

Can I restrict a mutation to update an object based on the object itself?

I have the following objects in my database:
[
  {
    "id": 1,
    "name": "foo",
    "objType": "A"
  },
  {
    "id": 2,
    "name": "bar",
    "objType": "B"
  }
]
And the following users:
[
  {
    "id": 3,
    "name": "User A",
    "role": "admin"
  },
  {
    "id": 4,
    "name": "User B",
    "role": "client"
  }
]
And I have a schema like:
enum ObjTypeEnum {
  A
  B
}
type MyObj {
  id: Int
  name: String
  objType: ObjTypeEnum
}
type Mutation {
  updateObj(id: Int!, name: String): MyObj
}
User A can update any object he wants because he is an admin. However, user B can only update an object if it is of type B.
That means:
If user B tries to update object 2, using the mutation updateObj(2, "new name"), this should be totally fine. However, if he tries to update object 1, updateObj(1, "new name"), this should return an error for this user.
My naïve solution for this is to get the object in the resolver, check its type and, if it's OK for the current user, proceed with the update; otherwise throw an error. But I have the feeling I'm going in the wrong direction and not using GraphQL properly...
Is it possible to do this using directives or something more generic, since the key used to validate the update is an enum?
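For what it's worth, if this sits on AppSync over DynamoDB like the other questions on this page (an assumption), the "get, check, then update" idea can be collapsed into a single conditional UpdateItem. A rough sketch, where $isAdmin and $allowedType are hypothetical values assumed to have been derived earlier (for example from $ctx.identity or a pipeline function):
{
  "version": "2017-02-28",
  "operation": "UpdateItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
  },
  "update": {
    ## "name" is a DynamoDB reserved word, hence the expression name
    "expression": "SET #name = :name",
    "expressionNames": { "#name": "name" },
    "expressionValues": { ":name": $util.dynamodb.toDynamoDBJson($ctx.args.name) }
  }
  #if( !$isAdmin )
  ## non-admins may only touch objects whose objType matches their allowed type
  ,"condition": {
    "expression": "objType = :allowedType",
    "expressionValues": {
      ":allowedType": $util.dynamodb.toDynamoDBJson($allowedType)
    }
  }
  #end
}
If the condition fails, DynamoDB rejects the write and the mutation returns an error, which matches the behaviour described above; whether a directive-based approach would be cleaner is still the open question.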

Spring Data Mongo: How to filter documents by optional attributes?

A few documents already stored in MongoDB:
{
  "companyName": "Google",
  "departmentName": "Sales"
},
{
  "companyName": "Google",
  "departmentName": "HR"
},
{
  "companyName": "Amazon",
  "departmentName": "Marketing"
}
I need to implement a method that will receive two attributes, companyName and departmentName (either of them can be optional), and will return a list of matching documents. For example:
When departmentName is null and companyName is Google, the method returns two documents:
{
  "companyName": "Google",
  "departmentName": "Sales"
},
{
  "companyName": "Google",
  "departmentName": "HR"
}
When companyName is null and departmentName is Marketing, only one document is returned:
{
  "companyName": "Amazon",
  "departmentName": "Marketing"
}
I tried to implement it in various ways, but none of them fits my needs:
public interface CompanyRepository extends ReactiveMongoRepository<Company, String> {
    Flux<Company> findByCompanyNameAndDepartmentName(String companyName, String departmentName);
}
This returns 0 results when companyName or departmentName is null.
Option 2:
Company company = Company.builder()
    .companyName(null)
    .departmentName("Marketing")
    .build();
repository.findAll(Example.of(company))
    .subscribe(System.out::println);
This also prints 0 results, but I expect to see one document.
So, please advise: how can I implement a proper search?
Could you try the repository interface below:
public interface CompanyRepository extends ReactiveMongoRepository<Company, String> {
    Flux<Company> findByCompanyNameOrDepartmentName(String companyName, String departmentName);
}
The change is the derived query method name, from findByCompanyNameAndDepartmentName to findByCompanyNameOrDepartmentName, since you need the result if either of the fields matches your input.
Reference: https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#jpa.query-methods.query-creation

AWS AppSync: pass arguments from parent resolver to children

In AWS AppSync, arguments sent on the main query don't seem to be forwarded to all child resolvers.
type Query {
  article(id: String!, consistentRead: Boolean): Article
  book(id: String!, consistentRead: Boolean): Book
}
type Article {
  title: String!
  id: String!
}
type Book {
  articleIds: [String]!
  articles: [Article]!
  id: String!
}
when I call:
query GetBook {
  book(id: 123, consistentRead: true) {
    articles {
      title
    }
  }
}
the first query to get the book receives the consistentRead param in $context.arguments, but the subsequent query to retrieve the articles does not ($context.arguments is empty).
I also tried articles(consistentRead: Boolean): [Article]! inside book but no luck.
Does anyone know if it's possible in AppSync to pass arguments to all queries that are part of the same request?
It is possible to pass arguments from parent to child via the response. Let me explain ...
AppSync has several containers inside $context:
- arguments
- stash
- source
arguments and stash are always cleared before invoking a child resolver, as is evident from these CloudWatch logs.
At the very end of the parent execution, arguments and stash data are present:
{
  "errors": [],
  "mappingTemplateType": "After Mapping",
  "path": "[getLatestDeviceState]",
  "resolverArn": "arn:aws:appsync:us-east-1:xxx:apis/yyy/types/Query/fields/getLatestDeviceState",
  "context": {
    "arguments": {
      "device": "ddddd"
    },
    "prev": {
      "result": {
        "items": [
          {
            "version": "849",
            "device": "ddddd",
            "timestamp": "2019-01-29T12:18:34.504+13:00"
          }
        ]
      }
    },
    "stash": {"testKey": "testValue"},
    "outErrors": []
  },
  "fieldInError": false
}
and then at the very beginning of the child resolver, arguments and stash are always blank:
{
  "errors": [],
  "mappingTemplateType": "Before Mapping",
  "path": "[getLatestDeviceState, media]",
  "resolverArn": "arn:aws:appsync:us-east-1:yyy:apis/xxx/types/DeviceStatePRODConnection/fields/media",
  "context": {
    "arguments": {},
    "source": {
      "items": [
        {
          "version": "849",
          "device": "ddddd",
          "timestamp": "2019-01-29T12:18:34.504+13:00"
        }
      ]
    },
    "stash": {},
    "outErrors": []
  },
  "fieldInError": false
}
Workaround 1 - get the argument from the previous result.
In the example above device is always present in the response of the parent resolver, so I inserted
#set($device = $util.defaultIfNullOrBlank($ctx.args.device, $ctx.source.items[0].device))
into the request mapping template of the child resolver. It will try to get the ID it needs from the arguments and then fall back onto the previous result.
Workaround 2 - add the argument to the parent response
Modify your parent resolver response template to include the arguments:
{
  "items": $utils.toJson($context.result.items),
  "device": "${ctx.args.device}"
}
and then retrieve it in the request mapping template of the child the same way as in the first workaround.
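In the child's request mapping template, the lookup then becomes the same fallback as before, now reading from the enriched parent response (a sketch):
#set( $device = $util.defaultIfNullOrBlank($ctx.args.device, $ctx.source.device) )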
To make the argument available across all related resolvers (nested ones, or those related collection-to-entity), Workaround 2 worked fine for me (thanks Max for such a good answer), but only for child resolvers.
In another case, when I needed to resolve entities from a collection query (one that contains other fields besides the entity), a property added to the response mapping template wasn't available anymore.
So my solution was to put it in the request headers:
## Set parent query profile parameter to headers to achieve availability across related resolvers.
#set( $headers = $context.request.headers )
$util.qr($headers.put("profile", $util.defaultIfNullOrBlank($context.args.profile, "default")))
And read this value from your nested/other request mapping templates:
#set($profile = $ctx.request.headers.profile)
This makes the parent argument available wherever I need it across related resolvers. In your case it would be 'device', with some default value, or without that part if not needed.
Add this to the Query.book response mapping template:
#set( $book = $ctx.result )
#set( $Articles = [] )
#foreach( $article in $book.articles )
  #set( $newArticle = $article )
  $util.qr($newArticle.put("bookID", $book.id))
  $util.qr($Articles.add($newArticle))
#end
$util.qr($book.put("articles", $Articles))
$util.toJson($book)
Now, every article will have bookID
You should be able to find consistentRead in $context.info.variables ($context.info.variables.consistentRead), as long as the client passes it as a GraphQL variable rather than an inline literal:
https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference.html#aws-appsync-resolver-context-reference-info
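For example, a child resolver's request mapping template could pick the variable up like this (a sketch; the GetItem key is only illustrative and uses the first entry of the parent's articleIds):
#set( $consistentRead = $util.defaultIfNull($ctx.info.variables.consistentRead, false) )
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.source.articleIds.get(0))
  },
  "consistentRead": $consistentRead
}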
You don't need to pass arguments to the sub-query. Based on your schema and use case, I think you can adjust your schema as below to have a relationship between Author and Book:
type Author {
  # parent's id
  bookID: ID!
  # author id
  id: ID!
  name: String!
}
type Book {
  id: ID!
  title: String!
  author: [Author]!
}
type Mutation {
  insertAuthor(bookID: ID!, id: ID!, name: String!): Author
  insertBook(id: ID!, title: String!): Book
}
type Query {
  getBook(id: ID!): Book
}
- Create table Author with Author.bookID as a primary key and Author.id as a sort key
- Create table Book with Book.id as a primary key
Then, you have to attach a resolver for Book.author
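That Book.author resolver isn't shown in this answer; a minimal sketch of its request mapping template, assuming the Author table layout described above, is a DynamoDB Query on the parent book's id:
{
  "version": "2017-02-28",
  "operation": "Query",
  "query": {
    ## fetch every author whose bookID equals the parent Book's id
    "expression": "bookID = :bookID",
    "expressionValues": {
      ":bookID": $util.dynamodb.toDynamoDBJson($ctx.source.id)
    }
  }
}
with $util.toJson($ctx.result.items) as its response mapping template.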
And here is a resolver for insertAuthor mutation
{
  "version" : "2017-02-28",
  "operation" : "PutItem",
  "key" : {
    "bookID" : $util.dynamodb.toDynamoDBJson($ctx.args.bookID),
    "id" : $util.dynamodb.toDynamoDBJson($ctx.args.id)
  },
  "attributeValues" : {
    "name" : $util.dynamodb.toDynamoDBJson($ctx.args.name)
  }
}
And when you run the getBook query, you will get the list of authors that have the same book id.
Simply, in the child resolver use $ctx.source.id, where id is the parameter you need to reference from the parent.
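Applied to the schema above, for instance, a Book.articles resolver can read the parent's articleIds from $ctx.source and batch-load them. A sketch, assuming the articles live in a DynamoDB table registered in AppSync under the (hypothetical) name Articles:
#set( $keys = [] )
#foreach( $articleId in $ctx.source.articleIds )
  #set( $key = {} )
  $util.qr($key.put("id", $util.dynamodb.toDynamoDB($articleId)))
  $util.qr($keys.add($key))
#end
{
  "version": "2018-05-29",
  "operation": "BatchGetItem",
  "tables": {
    "Articles": {
      "keys": $util.toJson($keys)
    }
  }
}
The matching response mapping template would then return $util.toJson($ctx.result.data.Articles).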

Empty data object returned in <Query> after mutation is applied for the same

Github Issue Posted Here
"apollo-boost": "^0.1.13",
"apollo-link-context": "^1.0.8",
"graphql": "^0.13.2",
"graphql-tag": "^2.9.2",
"react-apollo": "^2.1.11",
Current Code Structure
<div>
  <Query
    query={FETCH_CATEGORIES_AUTOCOMPLETE}
    variables={{ ...filters }}
    fetchPolicy="no-cache"
  >
    {({ loading, error, data }) => {
      console.log('category', loading, error, data); // _______Label_( * )_______
      if (error) return 'Error fetching products';
      const { categories } = data;
      return (
        <React.Fragment>
          {categories && (
            <ReactSelectAsync
              {...this.props.attributes}
              options={categories.data}
              handleFilterChange={this.props.handleCategoryFilterChange}
              loading={loading}
              labelKey="appendName"
            />
          )}
        </React.Fragment>
      );
    }}
  </Query>
  <Mutation mutation={CREATE_CATEGORY}>
    {createCategory => (
      <div>
        {/* category create form */}
      </div>
    )}
  </Mutation>
</div>
Behavior
Initially, the query fetches data and I get the list of categories inside data, logged at Label_( * ).
After entering the form details, the submission occurs successfully.
Issue: then, suddenly, at Label_( * ), the data object is empty.
How can I solve this?
Edit
These are the responses:
Categories GET
{
  "data": {
    "categories": {
      "page": 1,
      "rows": 2,
      "rowCount": 20,
      "pages": 10,
      "data": [
        {
          "id": "1",
          "appendName": "Category A",
          "__typename": "CategoryGETtype"
        },
        {
          "id": "2",
          "appendName": "Category B",
          "__typename": "CategoryGETtype"
        }
      ],
      "__typename": "CategoryPageType"
    }
  }
}
Category Create
{
  "data": {
    "createCategory": {
      "msg": "success",
      "status": 200,
      "category": {
        "id": "21",
        "name": "Category New",
        "parent": null,
        "__typename": "CategoryGETtype"
      },
      "__typename": "createCategory"
    }
  }
}
(I came across this question while facing a similar issue, which I have now solved)
In Apollo, when a mutation returns data that is used in a different query, the results of that query will be updated, e.g. this query that returns all the todos:
query {
  todos {
    id
    description
    status
  }
}
Then if we mark a todo as completed with a mutation
mutation CompleteTodo {
  markCompleted(id: 3) {
    todo {
      id
      status
    }
  }
}
And the result is
{
  todo: {
    id: 3,
    status: "completed",
    __typename: "Todo"
  }
}
Then the todo item with id 3 will have its status updated. The issue comes when the mutation tells the query it's stale but doesn't provide enough information for the query to be updated, e.g.
query {
  todos {
    id
    description
    status
    owner {
      id
      name
    }
  }
}
and
mutation CompleteTodo {
  assignToUser(todoId: 3, userId: 12) {
    todo {
      id
      owner {
        id
      }
    }
  }
}
Example result:
{
  todo: {
    id: 3,
    owner: {
      id: 12,
      __typename: "User"
    },
    __typename: "Todo"
  }
}
Imagine your app previously knew nothing about User:12. Here's what happens:
- The cache for Todo:3 knows that it now has owner User:12
- The cache for User:12 contains just {id: 12}, since the mutation didn't return the name field
- The query cannot give accurate information for the name field for the owner without refetching (which doesn't happen by default). It updates to return data: {}
Possible solutions
- Change the return query of the mutation to include all the fields that the query needs.
- Trigger a refetch of the query after the mutation (via refetchQueries), or of a different query that includes everything the cache needs.
- Manually update the cache.
Of those, the first is probably the easiest and the fastest. There's some discussion of this in the Apollo docs, but I don't see anything that describes the specific behavior of the data going empty.
https://www.apollographql.com/docs/angular/features/cache-updates.html
Tip of the hat to @clément-prévost for his comment that provided a clue:
Queries and mutations that fetch the same entity must query the same
fields. This is a way to avoid local cache issues.
After changing fetchPolicy to cache-and-network, it solved the issue. Link to fetchPolicy Documentation.
Doing so, I also had to perform a refetch query.
