GraphQL mutation to add item to existing array field

I have a model object that has a status log and I'd like to be able to add a new log item without having to replace the whole list. Is there a way to do this?
Here's a simplified schema. My projects have an array of StatusLog. What I'd like to do is push a new one onto the existing list without having to give the mutation the whole list each time as this will just get longer and longer.
type StatusLog {
status: String!
timestamp: String!
}
type Project @model @versioned {
id: ID!
statusLog: [StatusLog]
}
PS: is there a command-line way to take an annotated GraphQL schema like this (i.e. with the @model and @versioned directives) and generate the code locally without having to go through amplify api push?

Related

Can one have different types for the same field between a Prisma GraphQL schema and the datamodel?

I'm a newbie to Prisma/GraphQL. I'm writing a simple ToDo app and using Apollo Server 2 and Prisma GraphQL for the backend. I want to convert my createdAt field from the data model to something more usable on the front-end, like a UTC date string. My thought was to convert the stored value, which is a DateTime.
My datamodel.prisma has the following for the ToDo type
type ToDo {
id: ID! @id
added: DateTime! @createdAt
body: String!
title: String
user: User!
completed: Boolean! @default(value: false)
}
The added field is a DateTime. But in my schema.js I am listing that field as a String:
type ToDo {
id: ID!
title: String,
added: String!
body: String!
user: User!
completed: Boolean!
}
and I convert it in my resolver
ToDo: {
added: async (parent, args) => {
const d = new Date(parent.added)
return d.toUTCString()
}
}
Is this OK to do? That is, have different types for the same field in the datamodel and the schema? It seems to work OK, but I didn't know if I was opening myself up to trouble down the road, following this technique in other circumstances.
If so, the one thing I was curious about is why accessing parent.added in the ToDo.added resolver doesn't start some kind of 'infinite loop' -- that is, why reading parent.added doesn't invoke the resolver for that field again, which would read parent.added, and so on. (I guess it's just clever enough not to do that?)
I've only got limited experience with Prisma, but I understand you can view it as an extra back-end GraphQL layer interfacing between your own GraphQL server and your data (i.e. the database).
Your first model (datamodel.prisma) uses enhanced Prisma syntax and directives to accurately describe your data, and is used by the Prisma layer, while the second model uses standard GraphQL syntax to implement the same object as a valid, standard GraphQL type, and is used by your own back-end.
In effect, if you looked into it, you'd see the DateTime type used by Prisma is actually a String, one that Prisma likely uses to validate date and time formats, so there is no fundamental discrepancy between the two models. But even if there were a discrepancy, that would be up to you, since you can use resolvers to override the data you get from Prisma before returning it from your own back-end.
In short, what I'm trying to say here is that you're dealing with 2 different GraphQL layers: Prisma and your own. And while Prisma's role is to accurately represent your data as it exists in the database and to provide you with a wide collection of CRUD methods to work with that data, your own layer can (and should) be tailored to your specific needs.
As for your resolver question, parent in this context will hold the object returned by the parent resolver. Imagine you have a getTodo query at the root Query level returning a single item of type ToDo. Let's assume you resolve this to Prisma's default action to retrieve a single ToDo. According to your datamodel.prisma file, this query will resolve into an object that has an added property (which will exist in your DB as the createdAt field, as specified by the #createdAt Prisma directive). So parent.added will hold that value.
What your added resolver does is transform that original piece of data by turning it into an actual Date object and then formatting it into a UTC string, which conforms to your schema.js file where the added field is of type String!.
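To make that chain concrete, here is a minimal sketch of how the two layers might be wired up. The getTodo query comes from the example above; the context.prisma client and its method name are assumptions for illustration, not a specific Prisma API.
// Sketch only: context.prisma.toDo stands in for whatever Prisma client
// or binding you actually use to fetch a single ToDo.
const resolvers = {
  Query: {
    // Parent resolver: returns the raw ToDo object from the Prisma layer,
    // including the stored `added` value.
    getTodo: (_parent, args, context) => context.prisma.toDo({ id: args.id }),
  },
  ToDo: {
    // Field resolver: `parent` is the object returned by getTodo above.
    // `parent.added` is already plain data at this point, so reading it
    // does not trigger this resolver again; hence no infinite loop.
    added: (parent) => new Date(parent.added).toUTCString(),
  },
}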

How to use the same generated ID in two fields (Prisma GraphQL)

I'm implementing a GraphQL Prisma datamodel. I have a type called BankAccount, and I may need to update and delete accounts as well. I'm implementing this as an immutable object, so instead of updating a record in place I mark the existing record as isDeleted and add a new record. When doing that, I need to keep the id of the previous record so I know which record was updated. So I've come up with a type like this:
type BankAccount {
id: ID! @unique
parentBankAccount: String!
bankName: String!
bankAccountNo: String!
isDeleted: Boolean! @default(value: "false")
}
Here parentBankAccount keeps the id of the previous BankAccount. I'm thinking that when creating a bank account, I'd set parentBankAccount to the same value as the id, since it doesn't have a parent. The thing is, I'm not sure that's possible. I'm a bit new to GraphQL, so any help would be appreciated.
Thanks
In GraphQL, generally if one object refers to another, you should directly refer to that object; you wouldn't embed its ID. You can also make fields nullable, to support the case where some relationship just doesn't exist. For this specific field, then, this would look like
type BankAccount {
parentBankAccount: BankAccount
...
}
and that field would be null whenever an account doesn't have a parent.
At an API level, the layout you describe seems a little weird. If I call
query MyBankAccount {
me { accounts { id } }
}
I'll get back some unique ID. I'd be a little surprised to later call
query MyBalance($id: ID!) {
node(id: $id) {
... on BankAccount {
name
isDeleted
balance
}
}
}
and find out that my account has been "deleted" and that the balance is from a week ago.
Using immutable objects in the underlying data store makes some sense, particularly for auditability reasons, but that tends to not be something you can expose out through a GraphQL API directly (or most other API layers: this would be equally surprising in a REST framework where the object URL is supposed to be permanent).
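If you do keep immutable records in your data store, one option is to hide the versioning behind your resolvers so the API only ever shows current accounts. A rough sketch, where db.findCurrentAccounts and db.findAccount are hypothetical data-access helpers rather than any particular library:
const resolvers = {
  Query: {
    // Only expose records that have not been superseded.
    accounts: (_parent, _args, { db, userId }) =>
      db.findCurrentAccounts({ ownerId: userId, isDeleted: false }),
  },
  BankAccount: {
    // Resolve the parent reference as an object rather than a raw ID,
    // returning null when there is no previous version.
    parentBankAccount: (account, _args, { db }) =>
      account.parentBankAccountId
        ? db.findAccount({ id: account.parentBankAccountId })
        : null,
  },
}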

AWS AppSync: How can I create a resolver that retrieves the details for an array of identifiers?

This feels basic, so I would expect to find this scenario mentioned, but I have searched and can't find an example that matches my scenario. I have 2 endpoints (I am using HTTP data sources) that I'm trying to combine.
Class:
{
id: string,
students: [
<studentID1>,
<studentID2>,
...
]
}
and Student:
{
id: String,
lastName: String
}
What I would like is a schema that looks like this:
Student: {
id: ID!
lastName: String
}
Class: {
id: ID!,
studentDetails: [Student]
}
From reading, I know that I need some sort of resolver on Class.studentDetails that will return an array/List of student objects. Most of the examples I have seen show retrieving the list of Students based on class ID (ctx.source.id), but that won't work in this case. I need to call the students endpoint 1 time per student, passing in the student ID (I cannot fetch the list of students by class ID).
Is there a way to write a resolver for Class/studentDetails that loops through the student IDs in Class and calls my students endpoint for each one?
I was thinking something like this in the Request Mapping Template:
#set($studentDetails = [])
#foreach($student in $ctx.source.students)
$util.qr($studentDetails.add(...invoke web service to get student details...))
#end
$studentDetails
Edit: After reading Lisa Shon's comment below, I realized that there is a batch resolver for DynamoDB data sources that does this, but I don't see a way to do that for HTTP data sources.
It's not ideal, but you can create an intermediate type.
type Student {
id: ID!
lastName: String
}
type Class {
id: ID!,
studentDetails: [StudentDetails]
}
type StudentDetails {
student: Student
}
In your resolver template for Class, create a list of those student ids
#set($studentDetails = [])
#foreach ($student in $class.students)
$util.qr($studentDetails.add({"id": "$student.id"}))
#end
and add it to your response object. Then hook a resolver to the student field of StudentDetails, and you will be able to use $context.source.id for the individual student API call. Each id will be broken out of the array and become its own web request.
I opened a case with AWS Support and was told that the only way they know to do this is to create a Lambda resolver (a rough sketch follows the list below) that:
Takes an array of student IDs
Calls the students endpoint for each one
Returns an array of student details information
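A minimal sketch of such a Lambda, assuming a Node.js 18+ runtime (for the built-in fetch) and assuming your request mapping template passes the class's student IDs in as event.studentIds; the STUDENTS_API base URL is a placeholder:
// Hypothetical Lambda resolver for Class.studentDetails.
const STUDENTS_API = process.env.STUDENTS_API // e.g. "https://example.com/students"

export const handler = async (event) => {
  const ids = event.studentIds ?? []
  // One request per student ID, issued in parallel.
  const students = await Promise.all(
    ids.map(async (id) => {
      const res = await fetch(`${STUDENTS_API}/${id}`)
      if (!res.ok) throw new Error(`Student ${id} request failed: ${res.status}`)
      return res.json() // { id, lastName, ... }
    })
  )
  return students // becomes $ctx.result for the studentDetails field
}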
Instead of calling your student endpoint in the response, use a pipeline resolver and stitch the response from different steps using stash, context (prev.result/result), etc.

How to load GraphQL queries from the server without defining them in the front end?

Now let's say we are using a REST API. I have one endpoint like this: /homeNewsFeed. This API will give us a response like this:
[
{
blockTitle: 'News',
type: 'list',
api: 'http://localhost/news'
},
{
blockTitle: 'Photos',
type: 'gallery',
api: 'http://localhost/gallery'
}
]
Now after getting this we go through the array and call the respective endpoints to load the data. My question is, how to do this in GraphQL? Normally we define the query in the front end code. Without doing that, how to let the server decide what to send?
The main reason to do this: imagine we have a mobile app. We need to push new blocks to this news feed without sending an app update, but each item can have its own query.
Normally we define the query in the front end code. Without doing that, how to let the server decide what to send?
Per the spec, a GraphQL execution request must include two things: 1) a schema; and 2) a document containing an operation definition. The operation definition determines what operation (which query or mutation) to execute as well as the format of the response. There are workarounds and exceptions (I'll discuss some below), but, in general, if specifying the shape of the response on the client side is undesirable or somehow not possible, you should carefully consider whether GraphQL is the right solution for your needs.
That aside, GraphQL lends itself more to a single request, not a series of structured requests like your existing REST API requires. So the response would look more like this:
[
{
title: 'News',
content: [
...
],
},
{
title: 'Photos',
content: [
...
],
}
]
and the corresponding query might look like this:
query HomePageContent {
blocks {
title
content {
# additional fields
}
}
}
Now the question becomes how to differentiate between different kinds of content. This is normally solved by utilizing an interface or union to aggregate multiple types into a single abstract type. The exact structure of your schema will depend on the data you're sending, but here's an example:
interface BlockContentItem {
id: ID!
url: String!
}
type Story implements BlockContentItem {
id: ID!
url: String!
author: String!
title: String!
}
type Image implements BlockContentItem {
id: ID!
url: String!
alt: String!
}
type Block {
title: String!
content: [BlockContentItem!]!
}
type Query {
blocks: [Block!]!
}
You can now query blocks like this:
query HomePageContent {
blocks {
title
content {
# these fields apply to all BlockContentItems
__typename
id
url
# then we use inline fragments to specify type-specific fields
... on Image {
alt
}
... on Story {
author
title
}
}
}
}
Using inline fragments like this ensures type-specific fields are only returned for instances of those types. I included __typename to identify what type a given object is, which may be helpful to the client app (clients like Apollo automatically include this field anyway).
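On the client side, __typename is also what lets you decide how to render each item. A hypothetical sketch (the rendering is just string formatting here; in a real app it would be your UI components):
// Discriminated union matching the fields selected in the query above.
type ContentItem =
  | { __typename: "Story"; id: string; url: string; author: string; title: string }
  | { __typename: "Image"; id: string; url: string; alt: string }

function renderContentItem(item: ContentItem): string {
  switch (item.__typename) {
    case "Story":
      return `[story] ${item.title} by ${item.author} (${item.url})`
    case "Image":
      return `[image] ${item.alt} (${item.url})`
    default:
      // Unknown block types (e.g. ones added later on the server) are skipped.
      return ""
  }
}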
Of course, there is still the issue of what happens when you want to add a new block. If the block's content fits an existing type, no sweat. But what happens when you anticipate you will need a different type in the future, but can't design around that right now?
Typically, that sort of change would require both a schema change on the server and a query change on the client. And in most cases, this will probably be fine because if you're getting data in a different structure, you will have to update your client app anyway. Otherwise, your app won't know how to render the new data structure correctly.
But let's say we want to future-proof our schema anyway. Here are two ways you could go about doing it.
First, instead of specifying an interface for content, just utilize a custom JSON scalar. This will effectively throw response validation out the window, but it will allow you to return whatever you want for the content of a given block.
Second, abstract out whatever fields might be needed in the future into some kind of key-value type. For example:
type MetaItem {
key: String!
value: String!
}
type Block {
title: String!
meta: [MetaItem!]!
# other common fields
}
There's any number of other workarounds, some better than others depending on the kind of data you're working with. But hopefully that gives you some idea how to address the scenario you describe in a GraphQL context.

apollo-codegen output is empty

I'm running into a situation in which apollo-codegen is not successfully generating typescript code.
For the graphql file (generated/schema.graphql):
type Author {
id: Int!
firstName: String
lastName: String
posts: [Post]
}
type Post {
id: Int!
title: String
author: Author
votes: Int
}
I then run:
$apollo-codegen introspect-schema generated/schema.graphql --output generated/schema.json
this generates a ./generated/schema.json that appears to contain the relevant information (I see information about Author and its properties, and Post and its properties).
I then run
$apollo-codegen generate generated/schema.graphql --schema generated/schema.json --target typescript
and get an (effectively) empty output.
// This file was automatically generated and should not be edited.
/* tslint:disable */
/* tslint:enable */
I've tried generating .swift files as well, with similar empty output.
Apollo codegen version is:
"apollo-codegen": "^0.11.2",
Can anyone see what I'm doing wrong?
I'm a collaborator on apollo-codegen. Happy to hear that you're giving it a try!
You're not seeing any output because apollo-codegen generates types based on the GraphQL operations (query, mutation) in your project -- not based solely on the types in your schema.
In our experience, it's very rare that you would send a query for a full GraphQL type. Instead, we have found types based on graphql operations to be the most useful.
For instance, given the types you've provided, you might write a query:
query AuthorQuery {
authors {
firstName,
lastName
}
}
The type that would get generated (and that you'd probably want to use in code that consumes the results of this query) is:
type AuthorQuery = {
authors: Array<{
firstName: string,
lastName: string
}>
}
Notice how you would use the AuthorQuery type in your React component (or similar) whereas you wouldn't use an Author type since it would include more fields than you've actually requested.
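For instance, a hedged sketch of consuming the generated type (the AuthorQuery type is inlined here to keep the example self-contained; in practice you would import it from the generated file, and the formatting function is hypothetical):
// Shape as generated for the AuthorQuery operation above.
type AuthorQuery = {
  authors: Array<{
    firstName: string,
    lastName: string,
  }>,
}

// A consumer of the query result. Accessing a field the query did not select
// (e.g. posts) would be a compile-time error, which is the point of
// operation-based types.
function formatAuthors(data: AuthorQuery): string {
  return data.authors.map(a => `${a.firstName} ${a.lastName}`).join(", ")
}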
If you do however have a use-case for a 1:1 type from your graphql schema to typescript, do file an issue on the project itself and I'd be happy to discuss there :)
