GraphQL mutation gives an error in Golang client - go

The actual GraphQL query works fine:
mutation createQuestion($data: QuestionInput!) {
  createQuestion(data: $data) {
    data {
      id
      attributes {
        title
        description
        question_id
      }
    }
  }
}
When the client side (the Go code below) runs this query, it gives an error:
CreateQuestion struct {
    Data struct {
        ID         graphql.ID
        Attributes struct {
            Question_id graphql.String
            Title       graphql.String
            Description graphql.String
        }
    }
}
Passing the data like this:
variables := map[string]interface{} {
"data": map[string]interface{} {
"title": "My testFirst FbPost",
"description": "dfsdf is the body of my first post.",
"question_id": "35005",
},
}
gives an error like the one below:
Failed to run mutation: found non-200 OK status code: 400 Bad Request body:
"message":"Syntax Error: Expected Name",
"code":"GRAPHQL_PARSE_FAILED",
"stacktrace":"GraphQLError"
I am trying to post data to the server (Strapi CMS) using a Golang client library.

type QuestionInput map[string]interface{}
variables := map[string]interface{}{
"data": QuestionInput {
"title": "My testFirst FbPost",
"description": "dfsdf is the body of my first post.",
"question_id": "35005",
},
}
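The graphql.ID/graphql.String field types suggest a shurcooL/graphql-style client, which derives each variable's GraphQL type from its Go type name. With a bare map[string]interface{} the client has no usable type name to emit for $data, which is consistent with the "Syntax Error: Expected Name" parse failure. Declaring the named QuestionInput type, as above, gives the client a type name to render as $data: QuestionInput!. Below is a minimal sketch of the full call, assuming github.com/shurcooL/graphql (or the compatible hasura/go-graphql-client) and a placeholder endpoint; adjust auth and URL for your Strapi instance:

package main

import (
	"context"
	"log"

	"github.com/shurcooL/graphql"
)

// The Go type name is what the client uses as the variable's GraphQL type,
// i.e. it produces "$data: QuestionInput!" in the generated mutation document.
type QuestionInput map[string]interface{}

func main() {
	// Hypothetical endpoint; pass an *http.Client with auth if Strapi requires it.
	client := graphql.NewClient("https://example.com/graphql", nil)

	var m struct {
		CreateQuestion struct {
			Data struct {
				ID         graphql.ID
				Attributes struct {
					Question_id graphql.String `graphql:"question_id"`
					Title       graphql.String
					Description graphql.String
				}
			}
		} `graphql:"createQuestion(data: $data)"`
	}

	variables := map[string]interface{}{
		"data": QuestionInput{
			"title":       "My testFirst FbPost",
			"description": "dfsdf is the body of my first post.",
			"question_id": "35005",
		},
	}

	if err := client.Mutate(context.Background(), &m, variables); err != nil {
		log.Fatalf("Failed to run mutation: %v", err)
	}
	log.Printf("created question id: %v", m.CreateQuestion.Data.ID)
}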

Related

How can I handle a batch graphql request in gqlgen?

I could use some help figuring out how to handle a batch request in gqlgen. The request is coming from an Apollo client using Apollo's query batching, so the request body is a JSON array like so:
[
  {
    "operationName": "UpdateDocument",
    "variables": {
      "input": {
        "document_id": "123"
      }
    },
    "query": "mutation UpdateDocument($input: UpdateDocumentInput!) {
      updateDocument(input: $input) {
        document {
          id
          __typename
        }
        __typename
      }
    }"
  },
  {
    "operationName": "UpdateDocument",
    "variables": {
      "input": {
        "document_id": "124"
      }
    },
    "query": "mutation UpdateDocument($input: UpdateDocumentInput!) {
      updateDocument(input: $input) {
        document {
          id
          __typename
        }
        __typename
      }
    }"
  }
]
On the gqlgen side, I have a resolver that handles a single UpdateDocument mutation.
When I make the batch request I get an error: "json body could not be decoded: json: cannot unmarshal array into Go value of type graphql.RawParams"
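The error indicates that gqlgen's default transport decodes the body into a single graphql.RawParams object, so a JSON array (Apollo's batching format) fails to unmarshal. One common workaround is to disable batching on the Apollo link; another is to wrap the gqlgen handler in an HTTP middleware that splits the array and replays each operation. The following is a rough sketch of such a middleware under those assumptions; names and error handling are illustrative, not a drop-in solution:

package middleware

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"net/http/httptest"
)

// BatchSplitter detects a JSON-array request body, runs each element through
// the wrapped gqlgen handler, and returns the responses as a JSON array.
func BatchSplitter(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		// Not a batch: restore the body and let gqlgen handle it as usual.
		trimmed := bytes.TrimLeft(body, " \t\r\n")
		if len(trimmed) == 0 || trimmed[0] != '[' {
			r.Body = io.NopCloser(bytes.NewReader(body))
			next.ServeHTTP(w, r)
			return
		}

		var ops []json.RawMessage
		if err := json.Unmarshal(body, &ops); err != nil {
			http.Error(w, "could not decode batch body", http.StatusBadRequest)
			return
		}

		results := make([]json.RawMessage, 0, len(ops))
		for _, op := range ops {
			// Replay each operation as its own request against the real handler.
			req := r.Clone(r.Context())
			req.Body = io.NopCloser(bytes.NewReader(op))
			req.ContentLength = int64(len(op))
			rec := httptest.NewRecorder()
			next.ServeHTTP(rec, req)
			results = append(results, json.RawMessage(rec.Body.Bytes()))
		}

		w.Header().Set("Content-Type", "application/json")
		out, _ := json.Marshal(results)
		w.Write(out)
	})
}

Usage would then be along the lines of http.Handle("/query", BatchSplitter(srv)), where srv is the handler returned by gqlgen's handler.NewDefaultServer.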

Error Cannot return null for non-nullable type: 'String' within parent MyModelType' (/createMyModelType/id)

I am trying to trigger a mutation in the AWS console. I have linked my resolver function to a None type data source.
However, when I define my mutation with an input type as a parameter, the error "Cannot return null for non-nullable type: 'String' within parent 'MyModelType' (/createMyModelType/id)" occurs. Everything is fine, though, if I replace the input type with keyword arguments.
I am certain it has to do with my resolver mapping template.
Just if you're wondering why I am using a None type, I want to be able to trigger a subscription without making real database changes or mutations.
I am not sure how to make it work with input types. Here is my code for the template:
{
"version": "2017-02-28",
"payload": $util.toJson($context.args)
}
My Schema:
input CreateMyModelType5Input {
title: String
}
type Mutation {
createMyModelType5(input: CreateMyModelType5Input!): MyModelType5
}
type MyModelType5 {
id: ID!
title: String
}
type Subscription {
onCreateMyModelType5(id: ID, title: String): MyModelType5
#aws_subscribe(mutations: ["createMyModelType5"])
}
Query I am trying to run:
mutation createMyModelType($createmymodeltypeinput: CreateMyModelTypeInput!) {
createMyModelType(input: $createmymodeltypeinput) {
id
title
}
}
Query variables for the mutation:
{
"createmymodeltype5input": {
"title": "Hello, world!"
}
}
So I have been working on passing my arguments in the GraphQL mutation, and using the input type seemed the only straightforward way around it.
However, I have been able to do it this way:
mutation = """mutation CreateMyModelType($id: String!, $title: String!){
createMyModelType(id: $id, title: $title){
id
title
}
}
"""
input_params = {
"id": "34",
"title": "2009-04-12"
}
response = app_sync.createMyModelType(mutation, input_params)
This can be a good guide.
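For reference, the original error with the input type most likely comes from the None data source payload: MyModelType5.id is non-nullable, but $context.args only contains { "input": { "title": ... } }, so id resolves to null. A hedged sketch of a request mapping template that flattens the input and fabricates an id with $util.autoId() (adjust to your own needs) would look like:

{
  "version": "2017-02-28",
  "payload": {
    "id": "$util.autoId()",
    "title": "$ctx.args.input.title"
  }
}

Separately, note that the mutation in the question declares $createmymodeltypeinput of type CreateMyModelTypeInput! while the variables object supplies createmymodeltype5input and the schema defines CreateMyModelType5Input; those names have to line up before the resolver is even invoked.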

AWS AppSync query erroring out while using a resolver

I'm new to AWS AppSync, however it's been pretty easy to learn and understand.
I'm trying to create a resolver so that when the user runs getChore(id: "") it returns all the chore information, which it is successfully doing. The problem is that within the chore there are two fields, createdBy and assignedTo, which are linked to a User type.
type Chore {
id: ID!
title: String
desc: String
status: String
reward: Float
retryDeduction: Float
required: Boolean
createdDate: AWSDateTime
date: AWSDateTime
interval: String
assignedTo: User
createdBy: User
}
type User {
id: ID!
age: Int
f_name: String
l_name: String
type: Int
admin: Boolean
family: Family
}
Within AWS AppSync I'm trying to attach a resolver to assignedTo: User and createdBy: User, so my query will look like:
query getChore {
getChore(id: "36d597c8-2c7e-4f63-93ee-38e5aa8f1d5b") {
id
...
...
assignedTo {
id
f_name
l_name
}
createdBy {
id
f_name
l_name
}
}
}
However, when I fire off this query I'm getting an error:
The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException;
which I have researched and can't seem to find the correct solution for.
The resolver I'm using is:
{
"version": "2017-02-28",
"operation": "GetItem",
"key": {
"id": $util.dynamodb.toDynamoDBJson($ctx.args.id),
}
}
Response mapping template:
$util.toJson($ctx.result)
When you get the "The provided key element does not match the schema" error, it's because your request mapping template key doesn't match the primary key in DynamoDB. You can enable CloudWatch Logs in your AppSync settings to see exactly what was sent to DynamoDB.
I'm not able to tell what's wrong with your template because your sample lacks some information. If you can, answer the following questions about your application:
- Where are the users stored? Are they stored in their own DDB table separate from the chores, and is the hash key on the users table id as well?
- In the chores table how do you know which user your chore is assignedTo or createdBy? Is there a user id stored on the chore DDB item?
- Is the request mapping template you posted corresponding to the resolver attached to Chore.assignedTo? If yes, using $ctx.args.id will actually do a GetItem based on the chore id not the user it's assigned to.
Finally, I reproduced your application and I was able to make it work with a few changes.
Prerequisites:
I have a chores and a users DynamoDB table, both having id as the hash key. These two tables are mapped as data sources in AppSync.
I have one chore in the chores table that looks like:
{
"assignedTo": "1",
"createdBy": "2",
"id": "36d597c8-2c7e-4f63-93ee-38e5aa8f1d5b",
"title": "Chore1"
}
and two users in the users table:
{
"f_name": "Alice",
"id": "2",
"l_name": "Wonderland"
}
and
{
"f_name": "John",
"id": "1",
"l_name": "McCain"
}
I used your GraphQL schema
Resolvers
Resolver on Query.getChore pointing to the chores table:
{
"version": "2017-02-28",
"operation": "GetItem",
"key": {
"id": $util.dynamodb.toDynamoDBJson($ctx.args.id),
}
}
Resolver on Chore.assignedTo pointing to the users table (note the $ctx.source.assignedTo instead of $ctx.args)
{
"version": "2017-02-28",
"operation": "GetItem",
"key": {
"id": $util.dynamodb.toDynamoDBJson($ctx.source.assignedTo),
}
}
Similarly, resolver on Chore.createdBy pointing to the users table:
{
"version": "2017-02-28",
"operation": "GetItem",
"key": {
"id": $util.dynamodb.toDynamoDBJson($ctx.source.createdBy),
}
}
All resolvers use the pass-through response mapping template ($util.toJson($ctx.result)).
Running the query
Finally, when running your query:
query getChore {
getChore(id: "36d597c8-2c7e-4f63-93ee-38e5aa8f1d5b") {
id
assignedTo {
id
f_name
l_name
}
createdBy {
id
f_name
l_name
}
}
}
I get the following results:
{
"data": {
"getChore": {
"id": "36d597c8-2c7e-4f63-93ee-38e5aa8f1d5b",
"assignedTo": {
"id": "1",
"f_name": "John",
"l_name": "McCain"
},
"createdBy": {
"id": "2",
"f_name": "Alice",
"l_name": "Wonderland"
}
}
}
}
Hope it helps!

How to properly format data with AppSync and DynamoDB when Lambda is in between

Receiving data with AppSync directly from DynamoDB works for my case, but when I try to put a Lambda function in between, I receive an error that says "Can't resolve value (/issueNewMasterCard/masterCards): type mismatch error, expected type LIST".
Looking at the AppSync CloudWatch response mapping output, I get this:
"context": {
"arguments": {
"userId": "18e946df-d3de-49a8-98b3-8b6d74dfd652"
},
"result": {
"Item": {
"masterCards": {
"L": [
{
"M": {
"cardId": {
"S": "95d67f80-b486-11e8-ba85-c3623f6847af"
},
"cardImage": {
"S": "https://s3.eu-central-1.amazonaws.com/logo.png"
},
"cardWallet": {
"S": "0xFDB17d12057b6Fe8c8c434653456435634565"
},...............
Here is how I configured my response mapping template:
$utils.toJson($context.result.Item)
I'm doing this mutation:
mutation IssueNewMasterCard {
issueNewMasterCard(userId:"18e946df-d3de-49a8-98b3-8b6d74dfd652"){
masterCards {
cardId
}
}
}
and this is my schema:
type User {
userId: ID!
masterCards: [MasterCard]
}
type MasterCard {
cardId: String
}
type Mutation {
issueNewMasterCard(userId: ID!): User
}
The Lambda function:
// DynamoDB client setup (not shown in the original snippet)
const AWS = require('aws-sdk');
const dynamoDB = new AWS.DynamoDB();

exports.handler = (event, context, callback) => {
  const userId = event.arguments.userId;
  const userParam = {
    Key: {
      "userId": { S: userId }
    },
    TableName: "FidelityCardsUsers"
  };
  dynamoDB.getItem(userParam, function(err, data) {
    if (err) {
      console.log('error from DynamoDB: ', err);
      callback(err);
    } else {
      console.log('mastercards: ', JSON.stringify(data));
      callback(null, data);
    }
  });
};
I think the problem is that the getItem you get when you use the DynamoDB data source is not the same as the DynamoDB.getItem function in the aws-sdk.
Specifically, it seems like the data source version returns a response that has already been unmarshalled into plain JSON (that is, instead of something: { L: [ list of things ] } it just returns something: [ list of things ]).
This is important, because it means that $utils.toJson($context.result.Item) in your current setup is returning { masterCards: { L: [ ..., which is why you are seeing the type error: masterCards in this case is an object with a key L, rather than an array/list.
To solve this in the resolver, you can use the $util.dynamodb.toDynamoDBJson(Object) macro (https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference.html#dynamodb-helpers-in-util-dynamodb). i.e. your resolver should be:
$util.dynamodb.toDynamoDBJson($context.result.Item)
Alternatively you might want to look at the AWS.DynamoDB.DocumentClient class (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html). This includes versions of getItem, etc. that automatically marshal and unmarshall the proprietary DynamoDB typing back into native JSON. (Frankly I find this much nicer to work with and use it all the time).
In that case you can keep your old resolver, because you'll be returning an object where masterCards is just a JSON array.
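For illustration, here is a sketch of the same Lambda rewritten with DocumentClient; the table name and key are taken from the question, so treat it as a sketch rather than a verified drop-in:

const AWS = require('aws-sdk');
// DocumentClient marshals/unmarshals the DynamoDB attribute types automatically.
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
  const params = {
    TableName: "FidelityCardsUsers",
    Key: { userId: event.arguments.userId }
  };

  docClient.get(params, (err, data) => {
    if (err) {
      callback(err);
    } else {
      // data.Item is plain JSON, e.g. { masterCards: [ { cardId: "..." }, ... ] }
      callback(null, data);
    }
  });
};

With this version, data.Item no longer carries the { L: ... } / { S: ... } wrappers, so the original $utils.toJson($context.result.Item) response mapping template should work unchanged.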

Empty data object returned in <Query> after a mutation is applied for the same query

Github Issue Posted Here
"apollo-boost": "^0.1.13",
"apollo-link-context": "^1.0.8",
"graphql": "^0.13.2",
"graphql-tag": "^2.9.2",
"react-apollo": "^2.1.11",
Current Code Structure
<div>
  <Query
    query={FETCH_CATEGORIES_AUTOCOMPLETE}
    variables={{ ...filters }}
    fetchPolicy="no-cache"
  >
    {({ loading, error, data }) => {
      console.log('category', loading, error, data); // _______Label_( * )_______
      if (error) return 'Error fetching products';
      const { categories } = data;
      return (
        <React.Fragment>
          {categories && (
            <ReactSelectAsync
              {...this.props.attributes}
              options={categories.data}
              handleFilterChange={this.props.handleCategoryFilterChange}
              loading={loading}
              labelKey="appendName"
            />
          )}
        </React.Fragment>
      );
    }}
  </Query>
  <Mutation mutation={CREATE_CATEGORY}>
    {createCategory => (
      <div>
        {/* category create form */}
      </div>
    )}
  </Mutation>
</div>
Behavior
Initially, the query fetches data and I get the list of categories inside data, logged at Label_( * ).
After entering the form details, the submission occurs successfully.
Issue: Then, suddenly, at Label_( * ), the data object is empty.
How can I solve this?
Edit
These are the responses:
Categories GET
{
"data": {
"categories": {
"page": 1,
"rows": 2,
"rowCount": 20,
"pages": 10,
"data": [
{
"id": "1",
"appendName": "Category A",
"__typename": "CategoryGETtype"
},
{
"id": "2",
"appendName": "Category B",
"__typename": "CategoryGETtype"
}
],
"__typename": "CategoryPageType"
}
}
}
Category Create
{
"data": {
"createCategory": {
"msg": "success",
"status": 200,
"category": {
"id": "21",
"name": "Category New",
"parent": null,
"__typename": "CategoryGETtype"
},
"__typename": "createCategory"
}
}
}
(I came across this question while facing a similar issue, which I have now solved)
In Apollo, when a mutation returns data that is used in a different query, then the results of that query will be updated. e.g. this query that returns all the todos
query {
todos {
id
description
status
}
}
Then if we mark a todo as completed with a mutation
mutation CompleteTodo {
markCompleted(id: 3) {
todo {
id
status
}
}
}
And the result is
{
todo: {
id: 3,
status: "completed"
__typename: "Todo"
}
}
Then the todo item with id 3 will have its status updated. The issue comes when the mutation tells the query it's stale, but doesn't provide enough information for the query to be updated. e.g.
query {
todos {
id
description
status
owner {
id
name
}
}
}
and
mutation CompleteTodo {
assignToUser(todoId: 3, userId: 12) {
todo {
id
owner {
id
}
}
}
}
Example result:
{
todo: {
id: 3,
owner: {
id: 12,
__typename: "User"
},
__typename: "Todo"
}
}
Imagine your app previously knew nothing about User:12. Here's what happens:
- The cache for Todo:3 knows that it now has owner User:12.
- The cache for User:12 contains just {id: 12}, since the mutation didn't return the name field.
- The query cannot give accurate information for the name field for the owner without refetching (which doesn't happen by default). It updates to return data: {}.
Possible solutions
- Change the return query of the mutation to include all the fields that the query needs.
- Trigger a refetch of the query after the mutation (via refetchQueries), or of a different query that includes everything the cache needs.
- Manual update of the cache.
Of those, the first is probably the easiest and the fastest. There's some discussion of this in the Apollo docs, but I don't see anything that describes the specific behavior of the data going empty.
https://www.apollographql.com/docs/angular/features/cache-updates.html
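For the todo example above, option 1 simply means having the mutation select every field the query reads from the affected entities, e.g. also asking for the owner's name:

mutation CompleteTodo {
  assignToUser(todoId: 3, userId: 12) {
    todo {
      id
      owner {
        id
        name
      }
    }
  }
}

And for the code in the question, option 2 can be expressed directly on the react-apollo <Mutation> component. This is only a sketch: FETCH_CATEGORIES_AUTOCOMPLETE and filters are the names used in the question, and you may still need to review the fetchPolicy interaction described in the other answer.

<Mutation
  mutation={CREATE_CATEGORY}
  refetchQueries={[
    { query: FETCH_CATEGORIES_AUTOCOMPLETE, variables: { ...filters } },
  ]}
>
  {createCategory => (
    <div>
      {/* category create form */}
    </div>
  )}
</Mutation>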
Tip of the hat to #clément-prévost for his comment that provided a clue:
Queries and mutations that fetch the same entity must query the same fields. This is a way to avoid local cache issues.
After changing fetchPolicy to cache-and-network, the issue was solved. Link to the fetchPolicy documentation.
Doing so, I also had to perform a refetch query.
