When writing queries I can define a resolver on any field and that field’s value will be determined by its resolver, regardless of query depth.
However, when writing a mutation I seem to be able to define resolvers only at the root level. Adding a resolve method to fields in my args or input type does not seem to have any effect.
What’s the best way to deal with nested input in mutations?
What do you mean by nested input in your mutations? GraphQL input types do not have resolvers; resolvers only determine how to fetch results. If you would like to have nested input, for example creating a user together with its company, you can define a CreateUserInput and a CreateCompanyInput type like this in SDL:
input CreateCompanyInput {
  name: String!
  type: CompanyEnum!
}

input CreateUserInput {
  username: String!
  firstname: String!
  lastname: String!
  company: CreateCompanyInput!
}

type Mutation {
  createUser(input: CreateUserInput!): User
}
This way I am basically nesting arguments and can implement more complex mutations. In addition, I can reuse CreateCompanyInput for a createCompany mutation if I need one. The whole CreateUserInput, including the nested CreateCompanyInput, then arrives in the createUser resolver as the input argument, and since I am creating two new records I can wrap the inserts in a transaction. Not sure if this is what you mean by nested input; if you mean something else, just let me know :)
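For illustration, here is a minimal sketch of what such a createUser resolver could look like. The db client, its transaction helper, and the create methods are hypothetical placeholders, not a specific library's API:

const resolvers = {
  Mutation: {
    createUser: async (_parent, { input }, { db }) => {
      // Split the nested company input from the plain user fields.
      const { company, ...userFields } = input;
      // Create both records inside one transaction so a failure
      // leaves no orphaned company behind (hypothetical db API).
      return db.transaction(async (trx) => {
        const newCompany = await trx.companies.create(company);
        return trx.users.create({ ...userFields, companyId: newCompany.id });
      });
    },
  },
};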
I am looking to specify a certain required combination of parameters in a GraphQL query.
The query should be valid either with no params (returning all cats) or when filtering by size AND species.
extend type Query {
  cats(size: String, species: String): [Cat]
}
Is the only way to do this via the resolver (throwing an error if only one arg is passed), or is there a neater way?
I don't believe this is supported by the spec. You could define a new input type and use that instead, though.
input CatFilter {
  size: String!
  species: String!
}

extend type Query {
  cats(filter: CatFilter): [Cat]
}
That way the parameter is optional, but if given, both properties are required.
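For what it's worth, the resolver then only needs a single branch. A rough sketch, assuming a hypothetical catStore data source with findAll and findBySizeAndSpecies methods:

const resolvers = {
  Query: {
    cats: async (_parent, { filter }, { catStore }) => {
      // No filter given: return every cat.
      if (!filter) return catStore.findAll();
      // Filter given: the schema already guarantees both fields are present.
      return catStore.findBySizeAndSpecies(filter.size, filter.species);
    },
  },
};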
In Graphql, I have the following mutation:
activateUserAccount(token: String!): ActivateUserAccountResult!
the type is currently:
type ActivateUserAccountResult {
  id: Int
  activated_on: String
}
The problem with this is that the type ActivateUserAccountResult will most likely never get used by anything else. So I was wondering: if I created a more generic UserAccount type as follows and returned that instead, can you think of why this might be bad?
activateUserAccount(token: String!): UserAccount!

type UserAccount {
  id: Int
  activated_on: String
  email: String
  # ... other db fields here
}
The client would obviously still only request id and activated_on and ignore the other fields. The only downside I can think of is that you can't enforce activated_on by adding an exclamation mark (activated_on: String!).
Which is better practice: better reusability or being more concise?
I'm a newbie to Prisma/GraphQL. I'm writing a simple ToDo app and using Apollo Server 2 and Prisma GraphQL for the backend. I want to convert my createdAt field from the data model to something more usable on the front-end, like a UTC date string. My thought was to convert the stored value, which is a DateTime.
My datamodel.prisma has the following for the ToDo type
type ToDo {
  id: ID! @id
  added: DateTime! @createdAt
  body: String!
  title: String
  user: User!
  completed: Boolean! @default(value: false)
}
The added field is a DateTime. But in my schema.js I am listing that field as a String:
type ToDo {
  id: ID!
  title: String
  added: String!
  body: String!
  user: User!
  completed: Boolean!
}
and I convert it in my resolver
ToDo: {
  added: async (parent, args) => {
    const d = new Date(parent.added)
    return d.toUTCString()
  }
}
Is this OK to do? That is, have different types for the same field in the datamodel and the schema? It seems to work OK, but I didn't know if I was opening myself up to trouble down the road, following this technique in other circumstances.
If so, the one thing I was curious about is why accessing parent.added in the ToDo.added resolver doesn't start some kind of 'infinite loop': that is, when you access the parent.added field, why doesn't it look to the resolver to resolve that field, which accesses parent.added again, and so on? (I guess it's just clever enough not to do that?)
I've only got limited experience with Prisma, but I understand you can view it as an extra back-end GraphQL layer interfacing between your own GraphQL server and your data (i.e. the database).
Your first model (datamodel.prisma) uses enhanced Prisma syntax and directives to accurately describe your data, and is used by the Prisma layer, while the second model uses standard GraphQL syntax to implement the same object as a valid, standard GraphQL type, and is used by your own back-end.
In effect, if you looked into it, you'd see the DateTime type used by Prisma is actually a String, but is likely used by Prisma to validate date & time formats, etc., so there is no fundamental discrepancy between both models. But even if there was a discrepancy, that would be up to you as you could use resolvers to override the data you get from Prisma before returning it from your own back-end.
In short, what I'm trying to say here is that you're dealing with 2 different GraphQL layers: Prisma and your own. And while Prisma's role is to accurately represent your data as it exists in the database and to provide you with a wide collection of CRUD methods to work with that data, your own layer can (and should) be tailored to your specific needs.
As for your resolver question, parent in this context will hold the object returned by the parent resolver. Imagine you have a getTodo query at the root Query level returning a single item of type ToDo. Let's assume you resolve this to Prisma's default action to retrieve a single ToDo. According to your datamodel.prisma file, this query will resolve into an object that has an added property (which will exist in your DB as the createdAt field, as specified by the @createdAt Prisma directive). So parent.added will hold that value.
What your added resolver does is transform that original piece of data by turning it into an actual Date object and then formatting it into a UTC string, which conforms to your schema.js file where the added field is of type String!.
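To make the chain concrete, here is a rough sketch; the getTodo query name and the dataSource.findToDo call are hypothetical placeholders for whatever your server actually exposes. Note that the field resolver reads the raw value straight off parent rather than calling itself, which is why there is no infinite loop:

const resolvers = {
  Query: {
    // Parent resolver: returns the raw ToDo record, whose `added`
    // property is still the stored DateTime string.
    getTodo: (_parent, { id }, { dataSource }) => dataSource.findToDo(id),
  },
  ToDo: {
    // Field resolver: transforms the raw value from `parent`;
    // it never re-invokes itself, so there is no recursion.
    added: (parent) => new Date(parent.added).toUTCString(),
  },
};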
I have this schema on my graphcool:
type User @model {
  id: ID! @isUnique
  name: String!
  email: String!
  password: String!
}
Using playground, I can execute this properly:
query {
  User(id: "1234") {
    id
    name
  }
}
But this query:
query {
  User(name: "Thomas") {
    id
    name
  }
}
throws error:
Unknown argument 'name' on field 'User' of type 'Query'. (line 2, column 8):
User(name: "Thomas")
Why? And how do I fix this? From my POV, anything that's already on the model can be queried immediately, right? BTW, I'm a complete newbie in GraphQL, and there are almost no articles talking about this error (every tutorial just assumes this will immediately work), so please give a more elaborate answer if necessary.
GraphQL does not intrinsically allow arbitrary queries against objects.
Somewhere in your schema there will be an additional declaration like
type Query {
  User(id: ID!): User
}
The names in the Query type are the top-level queries you can run, and the arguments listed in that query are the only arguments they accept. (There is a corresponding Mutation type for top-level mutations, which can change the underlying state, and use the mutation keyword in a query.)
If you control the server implementation, you could add a parameter or an additional top-level query
userByName(name: String!): User
but you'd also have to provide an implementation of this query or handle the additional parameter, which is a code change.
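To give a rough idea of the work involved, a sketch of such an implementation might look like the following; the db.findUserByName call is a hypothetical placeholder, not a real API:

const resolvers = {
  Query: {
    // Resolver backing the new userByName(name: String!) query.
    userByName: (_parent, { name }, { db }) => db.findUserByName(name),
  },
};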
I've recently started to research the possibility of using GraphQL for requesting dynamic data configurations. The very first thing that jumps out at me is the strongly-typed concept of GraphQL.
Is there a way for GraphQL schemas to handle arrays of mixed type objects? I would greatly appreciate either an explanation or possibly a reference I can read over.
I am currently working with GraphQL with Node.js but a later implementation will be out of a Java Container. All data will be JSON pulled from MongoDB.
You either have to make these disparate types implement the same interface, make your resolvers return unions, or create a custom scalar to hold the dynamic data.
The cleanest approach is the first one: if your resulting objects can be of a limited number of types, define the types so that they implement the same interface, and type your resolvers by the interface. This allows the client to conditionally select sub-fields based on the actual type, and you maintain type safety.
The second approach has similar limitations: you need to know the possible types ahead of time, but they do not have to implement the same interface. It is preferable when the possible values are unrelated to each other and have either/or semantics, like success/failure.
The custom scalar approach is the only one in which you do not need to know the possible types of the result, i.e. the structure of the result can be completely dynamic. Here's an implementation of that approach, known as JSON scalar (i.e. cram any JSON-serializable structure into a scalar value). The big downside of this approach is that it makes sub-selection impossible, as the entire value becomes one big scalar (even though it's a complex object).
Since the question is asking about an array of objects of unknown types, I'll point out that you can, of course, have a list of all the options above.
Examples:
# Interface for any search result
interface SearchResult {
  title: String!
  url: String!
}

# A specific kind of search result
type Book implements SearchResult {
  title: String!
  url: String!
  author: Author!
  isbn: String!
}

type Article implements SearchResult {
  title: String!
  url: String!
  categories: [Category]!
}

type Query {
  # Search can return a mix of Books and Articles
  search(keyword: String!): [SearchResult!]
}
Or
# No interface this time
type Book {
  name: String! # No common fields with Article
  author: Author!
  publisher: Publisher!
}

type Article {
  title: String!
  url: String!
  categories: [Category]!
}

union SearchResult = Book | Article

type Query {
  # Search can return a mix of Books and Articles
  search(keyword: String!): [SearchResult!]
}
Or
scalar JSON

type Query {
  # Search can return anything at all... All bets are off
  search(keyword: String!): [JSON!]
}
If the data is completely free-form JSON and you would rather preserve it as-is, check out the JSON scalar type. Basically:
import { GraphQLObjectType } from 'graphql';
import GraphQLJSON from 'graphql-type-json';
export default new GraphQLObjectType({
  name: 'MyType',
  fields: {
    myField: { type: GraphQLJSON },
  },
});
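If you are using an SDL-first setup (as in the scalar JSON example above) rather than building types programmatically, the same package can be wired into the resolver map; a minimal sketch, assuming graphql-tools / Apollo Server conventions:

import GraphQLJSON from 'graphql-type-json';

const resolvers = {
  // Attach the scalar implementation to the `scalar JSON` declared in the SDL.
  JSON: GraphQLJSON,
};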
I think it's possible to make a custom/generic type that will fit the need. That way it's still a strongly-typed array, but the type will be flexible enough to express what you need.
Here is an example with custom types:
https://github.com/stylesuxx/graphql-custom-types