Is it okay for a resolver to have side effects besides resolving the type? - graphql

When creating a GraphQL mutation or query, you usually retrieve or save data. But let's assume I would like to send an e-mail when the data is saved or perform some additional side effects.
Is it common practice for resolvers to have these kinds of side effects? According to the single-responsibility principle (the "S" in SOLID), resolvers should only resolve data, right?
If it turns out that the resolver should not have side effects like these, then where would the side effects belong?
Most tutorials and articles online, including the official tutorial of GraphQL itself, don't seem to cover this or take it into account.
Many thanks!

It depends on which type's fields the resolver is resolving.
If it resolves fields on the root Mutation type, side effects are expected, since a mutation is supposed to modify server-side data. So it is fine to send an e-mail in a root mutation field's resolver.
Resolvers for object types and root Query fields, on the other hand, are only supposed to retrieve data, so it would be surprising for them to have any side effects.
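To make the distinction concrete, here is a minimal sketch of a mutation resolver that performs both effects. All names (`saveUser`, `sendWelcomeEmail`, the `Deps` context shape) are hypothetical; the only assumption is the common `(parent, args, context)` resolver signature:

```typescript
// Hypothetical mutation resolver: the data write and the follow-up
// e-mail are both side effects, which is acceptable on Mutation fields.
type User = { id: string; email: string; name: string };

interface Deps {
  saveUser: (input: { email: string; name: string }) => Promise<User>;
  sendWelcomeEmail: (to: string) => Promise<void>;
}

async function createUser(
  _parent: unknown,
  args: { email: string; name: string },
  ctx: Deps
): Promise<User> {
  const user = await ctx.saveUser(args);   // primary effect: persist data
  await ctx.sendWelcomeEmail(user.email);  // secondary side effect
  return user;                             // still resolves the field's type
}
```

Injecting the e-mail sender via the context (rather than calling it directly) also keeps the resolver testable and moves the side-effect logic out of the resolver body itself.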

Related

Conditionally disable Apollo cache normalization for certain usage of a type?

I have a situation, using Apollo InMemoryCache on a React client, where I'd like to be able to instruct Apollo not to use cache normalization for certain nodes in the graph without having to disable caching entirely for that type. Is this possible?
To better explain what I mean: Say that I have an entity Person, that I generally want Apollo to use cache for, but I also have an endpoint called PersonEvent that has these two fields:
old: Person!
new: Person!
This returns two historic snapshots of the same person, used for showing what changed at a certain event in time. The problem is that with cache normalization turned on for Person, the cache interprets old and new as the same instance, since they have the same id and __typename, and replaces both fields with a reference to the same cached object.
I know it is possible to configure Apollo not to normalize objects of a certain type, using the config code below, but then all caching of Person objects is disabled, and that's not what I want:
typePolicies: {
  Person: {
    keyFields: false
  }
}
So my question is: What would be the best practice way to handle this situation? I think there's kind of a philosophical question to it as well: "Is a snapshot of a person, a person?". I could potentially ask the backend dev to add some sort of timestamp to the Person entity so that it could be used to build a unique ID, but I also feel like that would be polluting the Person object, as the timestamp is only relevant in the case of a snapshot (which is an edge case). Is this a situation that should generally be solved on the client-side or the server-side?
Given that the graph is as it is, I'd like to only instruct Apollo not to cache the old/new fields on PersonEvent, but I haven't found a way to achieve that yet.
To get philosophical with you:
Is a snapshot of a person, a person?
I think you're answering your question by the problem you're having. The point of a cache is that you can set a value by its ID and you can load that value by its ID. The same can be said for any entity. The cache is just a way of loading the entity. Your Person object appears to be an entity. I'm guessing based on your conundrum that this is NOT true for this snapshot; that it isn't "an entity"; that it doesn't have "an ID" (though it may contain the value of something else's id).
What you are describing is an immutable value object.
And so, IMO, the solution here would be to create a type that represents a value object, and as such is uncacheable: PersonSnapshot (or similar).
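If the backend does expose a separate snapshot type, the client-side policy could then look something like the sketch below. `PersonSnapshot` is the hypothetical value-object type; `keyFields: false` on it keeps each snapshot embedded in its parent `PersonEvent`, while `Person` itself stays normalized:

```typescript
// Sketch, assuming the schema has a distinct PersonSnapshot type.
// Person remains a normalized entity; PersonSnapshot is a value object
// stored inline with its parent instead of being merged by id.
const typePolicies = {
  Person: {
    keyFields: ["id"],          // normal entity caching by id
  },
  PersonSnapshot: {
    keyFields: false as const,  // embed in parent; never normalize
  },
};
```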

Entities or Models in NestJs code first GraphQl

I am new to NestJS and GraphQL, and I am learning by going over some tutorials. There appears to be an inconsistency in the usage of the terminology "model" and "entity". The NestJS schematics resource generator for GraphQL code-first produces entities, yet the examples shown on their website use models.
produces entities:
nx generate @nestjs/schematics:resource generated --language=ts --type=graphql-code-first
uses models, with no mention of entities in the code-first approach:
https://docs.nestjs.com/graphql/resolvers
Which terminology is most appropriate?
Thank You,
Michael
Both are generally correct. It comes down to naming preferences.
I view entities as database entities, or database table maps. They map from your database data to a class representation that your code will understand. Models can also be used for this, which I believe is the term that Sequelize and Mongoose prefer.
Models, as described in the docs you linked, are generally your DTOs, your schema objects that you expect the API to accept and respond with.
You'll notice that the generator also generates two @InputType() files as well, which will be more closely tied to your incoming DTO, while the entity.ts will be closer to your response DTO.
So, both are correct, and it comes down to naming preferences.
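The distinction can be illustrated without any framework code. In this sketch (all names hypothetical), the "entity" maps a raw database row to a class, while the "model"/DTO is the narrower shape the API exposes:

```typescript
// Illustrative only: "entity" = database-row mapping, "model" = API shape.
interface UserRow {                 // raw database row, snake_case columns
  user_id: number;
  email_address: string;
  password_hash: string;
}

class UserEntity {                  // entity: maps the table into code
  constructor(
    public id: number,
    public email: string,
    public passwordHash: string
  ) {}
  static fromRow(row: UserRow): UserEntity {
    return new UserEntity(row.user_id, row.email_address, row.password_hash);
  }
}

interface UserModel {               // model/DTO: what the API responds with
  id: number;
  email: string;                    // note: no password hash exposed
}

function toModel(e: UserEntity): UserModel {
  return { id: e.id, email: e.email };
}
```

Keeping the two shapes separate is what lets the entity carry database-only fields (like the password hash) without leaking them into responses.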

Is it essential to send response data to dto when working with graphql?

Hi, I am making an API server with NestJS and GraphQL, and I have a question.
When the API server passes the response up the stack, is it right to load the response into a DTO?
If so, what is the best way to do it?
Or does the GraphQL schema type play the role of the DTO?
Let me know, thank you!
A DTO is an object that helps developers and consumers know what shape the data is going to be in as it goes "over the wire" (when the request or response is made). In GraphQL we use schemas as our DTOs and follow the GraphQL Query Language spec so that our GraphQL server will know how to deserialize incoming requests and serialize outgoing ones in accordance with the schemas we create. The only thing that matters in the end is that the data is the correct shape^, not that it's an instance of a class that we created for the sake of the server. With NestJS, it depends whether you take the code-first or schema-first approach, but generally in the code-first approach it can be said that your GQL schema is your DTO. From there, as long as the data looks like the DTO, you should be fine.
^ excluding the use of nested schemas which can become problematic if you are trying to return raw JSON and nothing else
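The "shape over class" point can be demonstrated directly: a class instance and a plain object literal with the same fields serialize identically, which is all the server ultimately sends. Names and data here are made up:

```typescript
// Sketch: the server only cares that the returned value has the
// schema's shape; a class instance and a plain literal serialize the same.
class BookDto {
  constructor(public title: string, public pages: number) {}
}

// A resolver returning a plain object literal (hypothetical data):
function bookResolver(): { title: string; pages: number } {
  return { title: "1984", pages: 328 };
}

// Both produce identical JSON, which is what goes over the wire.
const fromClass = JSON.stringify(new BookDto("1984", 328));
const fromLiteral = JSON.stringify(bookResolver());
```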

GraphQL field-level validation in AppSync

I have an AppSync API that's mostly backed by a DynamoDB store. Most of the resolvers are hooked up directly to the DynamoDB sources, not using lambdas.
Some of the fields should have validation constraints, such as a length limit or a regexp. In one particular case I would like to require that a state field contain an ISO 3166-2 value like US-NY. (GraphQL enum values can't contain hyphens, so that isn't an option here.)
Other than replacing some resolvers with lambdas, the only way I can think of to apply these sorts of validation rules is to do it in VTL in the RequestMappingTemplate. That would work, but it would be tedious and likely result in duplicate code. Are there alternatives?
Unfortunately, the only way without a Lambda is VTL. Instead of writing the validation directly inside each RequestMappingTemplate, I suggest using a pipeline resolver, which reduces duplication.
Pipeline Resolvers contain one or more Functions which are executed in order.
Functions allow you to write common logic for reuse across multiple Resolvers in your schema. They are attached directly to a data source and like a Unit resolver, contain the same request and response mapping template format.
You can find a good example here.

Is there any mention in graphql specification about graphql delete mutation return type

I saw some GraphQL implementations that return the whole object after deletion and some implementations that return only the id of the deleted object.
What is the right way according to the GraphQL specification?
The specification is not there to dictate API design decisions nor even prescribe best practices. It's there to make sure different GraphQL engines and clients are compatible between each other.
As for your question, there's no right or wrong answer. Do what makes sense for your use case. If you take only an ID as the input for deletion, it makes sense to return the whole object. If you accept the whole object already, there's not much benefit in returning the exact same thing right back...
Decide what makes sense and keep your API consistent across the operations.
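Both conventions can be sketched side by side. The store, type, and function names below are hypothetical; the spec mandates neither shape:

```typescript
// Two common return shapes for a delete mutation (hypothetical names).
type Person = { id: string; name: string };
const store = new Map<string, Person>([["1", { id: "1", name: "Ada" }]]);

// Option A: return the full deleted object.
// Useful when the input is just an id and clients want the last known state.
function deletePersonReturningObject(id: string): Person | null {
  const person = store.get(id) ?? null;
  store.delete(id);
  return person;
}

// Option B: return only the id.
// Enough for clients to evict the object from their caches.
function deletePersonReturningId(id: string): { deletedId: string } {
  store.delete(id);
  return { deletedId: id };
}
```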
