NestJS: transform a property using ValidationPipe before validation executes during DTO creation

I'm using the built-in NestJS ValidationPipe along with class-validator and class-transformer to validate and sanitize inbound JSON body payloads. One scenario I'm facing is a mixture of upper- and lower-case property names in the inbound JSON objects. I'd like to rectify and map these properties to standard camel-cased models in our new TypeScript NestJS API so that I don't couple mismatched patterns from a legacy system to our new API and standards, essentially using @Transform in the DTOs as an isolation mechanism for the rest of the application. For example, these properties on the inbound JSON object:
"propertyone",
"PROPERTYTWO",
"PropertyThree"
should map to
"propertyOne",
"propertyTwo",
"propertyThree"
I'd like to use @Transform to accomplish this, but I don't think my approach is correct. I'm wondering if I need to write a custom ValidationPipe. Here is my current approach.
Controller:
import { Body, Controller, Post, UsePipes, ValidationPipe } from '@nestjs/common';
import { TestMeRequestDto } from './testmerequest.dto';

@Controller('test')
export class TestController {
  constructor() {}

  @Post()
  @UsePipes(new ValidationPipe({ transform: true }))
  async get(@Body() testMeRequestDto: TestMeRequestDto): Promise<TestMeResponseDto> {
    const response = ...; // do something useful here
    return response;
  }
}
TestMeModel:
import { IsNotEmpty } from 'class-validator';

export class TestMeModel {
  @IsNotEmpty()
  someTestProperty!: string;
}
TestMeRequestDto:
import { IsNotEmpty, ValidateNested } from 'class-validator';
import { Transform, Type } from 'class-transformer';
import { TestMeModel } from './testme.model';

export class TestMeRequestDto {
  @IsNotEmpty()
  @Transform((propertyone) => propertyone.valueOf())
  propertyOne!: string;

  @IsNotEmpty()
  @Transform((PROPERTYTWO) => PROPERTYTWO.valueOf())
  propertyTwo!: string;

  @IsNotEmpty()
  @Transform((PropertyThree) => PropertyThree.valueOf())
  propertyThree!: string;

  @ValidateNested({ each: true })
  @Type(() => TestMeModel)
  simpleModel!: TestMeModel;
}
Sample payload used to POST to the controller:
{
  "propertyone": "test1",
  "PROPERTYTWO": "test2",
  "PropertyThree": "test3",
  "simpleModel": { "sometestproperty": "test4" }
}
The issues I'm having:
1. The transforms seem to have no effect. class-validator tells me that each of those properties cannot be empty. If, for example, I change "propertyone" to "propertyOne", then validation passes for that property, i.e. it sees the value. The same goes for the other two properties: if I camel-case them, class-validator is happy. Is this a symptom of the transform not running before validation occurs?
2. This one is very weird. When I debug and evaluate the TestMeRequestDto object, I can see that the simpleModel property contains an object with a property named "sometestproperty", even though the class definition for TestMeModel has a camel-cased "someTestProperty". Why doesn't @Type(() => TestMeModel) respect the proper casing of that property name? The value "test4" is present on this property, so it knows how to understand that value and assign it.
3. Weirder still, the @IsNotEmpty() validation for the "someTestProperty" property on TestMeModel is not failing, i.e. it sees the "test4" value and is satisfied, even though the inbound property name in the sample JSON payload is "sometestproperty", which is all lower case.
Any insight and direction from the community would be greatly appreciated. Thanks!

You'll probably need to make use of the Advanced Usage section of the class-transformer docs. Essentially, your @Transform() would need to look something like this:
import { IsNotEmpty, ValidateNested } from 'class-validator';
import { Transform, Type } from 'class-transformer';
import { TestMeModel } from './testme.model';

export class TestMeRequestDto {
  @IsNotEmpty()
  @Transform((value, obj) => obj.propertyone.valueOf())
  propertyOne!: string;

  @IsNotEmpty()
  @Transform((value, obj) => obj.PROPERTYTWO.valueOf())
  propertyTwo!: string;

  @IsNotEmpty()
  @Transform((value, obj) => obj.PropertyThree.valueOf())
  propertyThree!: string;

  @ValidateNested({ each: true })
  @Type(() => TestMeModel)
  simpleModel!: TestMeModel;
}
This should take an incoming payload of
{
  "propertyone": "value1",
  "PROPERTYTWO": "value2",
  "PropertyThree": "value3"
}
and turn it into the DTO you envision.
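For what it's worth, the positional (value, obj) callback shown above is the pre-0.4 class-transformer API. From class-transformer 0.4 onward, the @Transform() callback receives a single params object instead, so the equivalent on newer versions would look roughly like this (a sketch, not verified against every version):

import { Transform } from 'class-transformer';

export class TestMeRequestDto {
  // class-transformer 0.4+ passes one { value, key, obj, type, options } argument
  @Transform(({ obj }) => obj.propertyone)
  propertyOne!: string;
}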
Edit 12/30/2020
So the original idea I had of using @Transform() doesn't quite work as envisioned, which is a real bummer because it looks so nice. What you can do instead isn't quite as DRY, but it still works with class-transformer, which is a win. By making use of @Exclude() and @Expose(), you're able to use property accessors as an alias for the weirdly named property, looking something like this:
class CorrectedDTO {
  @Expose()
  get propertyOne(): string {
    return this.propertyONE;
  }

  @Expose()
  get propertyTwo(): string {
    return this.PROPERTYTWO;
  }

  @Expose()
  get propertyThree(): string {
    return this.PrOpErTyThReE;
  }

  @Exclude({ toPlainOnly: true })
  propertyONE: string;

  @Exclude({ toPlainOnly: true })
  PROPERTYTWO: string;

  @Exclude({ toPlainOnly: true })
  PrOpErTyThReE: string;
}
Now you're able to access dto.propertyOne and get the expected value, and when you do classToPlain it will strip out propertyONE and the other misnamed properties (if you're using Nest's serialization interceptor; otherwise, in a secondary pipe, you could do plainToClass(NewDTO, classToPlain(value)), where NewDTO has only the corrected fields).
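If you go that secondary-pipe route, a minimal sketch could look like this (CorrectedPipe and NewDTO are hypothetical names; classToPlain/plainToClass are the pre-0.5 names, later renamed instanceToPlain/plainToInstance):

import { Injectable, PipeTransform } from '@nestjs/common';
import { classToPlain, plainToClass } from 'class-transformer';

@Injectable()
export class CorrectedPipe implements PipeTransform {
  transform(value: CorrectedDTO) {
    // classToPlain strips the @Exclude()d misnamed properties,
    // plainToClass then builds a DTO containing only the corrected fields
    return plainToClass(NewDTO, classToPlain(value));
  }
}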
The other thing you may want to look into is an automapper, to see if it has better capabilities for something like this.
If you're interested, here's the StackBlitz I was using to test this out

As an alternative to Jay's excellent answer, you could also create a custom pipe where you keep the logic for mapping/transforming the request payload to your desired DTO. It can be as simple as this:
import { ArgumentMetadata, PipeTransform } from '@nestjs/common';
import { IsNotEmpty } from 'class-validator';

export class RequestConverterPipe implements PipeTransform {
  transform(body: any, metadata: ArgumentMetadata): TestMeRequestDto {
    const result = new TestMeRequestDto();
    // can of course contain more sophisticated mapping logic
    result.propertyOne = body.propertyone;
    result.propertyTwo = body.PROPERTYTWO;
    result.propertyThree = body.PropertyThree;
    return result;
  }
}

export class TestMeRequestDto {
  @IsNotEmpty()
  propertyOne: string;
  @IsNotEmpty()
  propertyTwo: string;
  @IsNotEmpty()
  propertyThree: string;
}
You can then use it like this in your controller (but you need to make sure that the order is correct: the RequestConverterPipe must run before the ValidationPipe, which also means the ValidationPipe cannot be registered globally):
@UsePipes(new RequestConverterPipe(), new ValidationPipe())
async post(@Body() requestDto: TestMeRequestDto): Promise<TestMeResponseDto> {
  // ...
}

Related

Access context from Apollo GraphQL mutation field directive

I have an input type like this:
input PetGiraffe {
  name: String @addUserLastName
}
Inside the directive, I need access to the request's context, so that I can add the user's last name to the giraffe's name. Here's the relevant part of what I've got so far:
const addUserLastNameDirective = {
  typeDefs: gql`directive @addUserLastName on INPUT_FIELD_DEFINITION`,
  transformer: (schema: GraphQLSchema, directiveName = 'addUserLastName') => {
    return mapSchema(schema, {
      [MapperKind.INPUT_OBJECT_FIELD]: (fieldConfig, fieldName, typeName, schema) => {
        const directive = getDirective(schema, fieldConfig, directiveName)?.[0];
        if (directive) {
          // Need context in here because the user is in the context.
        }
      },
    });
  },
};
For queries, I understand you can override the fieldConfig.resolve method and get access to the context that way. But if I try that with this mutation, it throws: field has a resolve property, but Input Types cannot define resolvers.
The closest I could find was this from the graphql-tools docs, but that doesn't solve my problem of accessing the context.
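For comparison, the resolver-wrapping technique that works for output (object) fields, but not input fields, follows graphql-tools' documented mapSchema pattern; roughly like this (a sketch, assuming @graphql-tools/utils):

import { defaultFieldResolver, GraphQLSchema } from 'graphql';
import { getDirective, mapSchema, MapperKind } from '@graphql-tools/utils';

function addUserLastNameTransformer(schema: GraphQLSchema, directiveName = 'addUserLastName') {
  return mapSchema(schema, {
    [MapperKind.OBJECT_FIELD]: (fieldConfig) => {
      if (getDirective(schema, fieldConfig, directiveName)?.[0]) {
        const { resolve = defaultFieldResolver } = fieldConfig;
        fieldConfig.resolve = (source, args, context, info) => {
          // the third resolver argument is the per-request context (context.user, etc.)
          return resolve(source, args, context, info);
        };
      }
      return fieldConfig;
    },
  });
}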

NestJS GraphQL custom argument type

I'm trying to use the LocalDate type from js-joda as a parameter on a GraphQL query, like this:
@Query(() => DataResponse)
async getData(@Args() filter: DataFilter): Promise<DataResponse> { ... }
And here is the filter type definition:
@ArgsType()
export class DataFilter {
  @Field({ nullable: true })
  @IsOptional()
  date?: LocalDate;

  @Field()
  @Min(1)
  page: number;

  @Field()
  @Min(1)
  pageSize: number;
}
I've also registered LocalDate as a scalar type and added it to the application providers.
@Scalar('LocalDate', (type) => LocalDate)
export class LocalDateScalar implements CustomScalar<string, LocalDate> {
  description = 'A date string, such as 2018-07-01, serialized in ISO8601 format';

  parseValue(value: string): LocalDate {
    return LocalDate.parse(value);
  }

  serialize(value: LocalDate): string {
    return value.toString();
  }

  parseLiteral(ast: ValueNode): LocalDate {
    if (ast.kind === Kind.STRING) {
      return LocalDate.parse(ast.value);
    }
    return null;
  }
}
This is the error I'm getting
[Nest] 9973 - 02/16/2022, 5:33:41 PM ERROR [ExceptionsHandler] year must not be null
NullPointerException: year must not be null
    at requireNonNull (/Users/usr/my-app/node_modules/@js-joda/core/src/assert.js:33:15)
    at new LocalDate (/Users/usr/my-app/node_modules/@js-joda/core/src/LocalDate.js:284:9)
    at TransformOperationExecutor.transform (/Users/usr/my-app/node_modules/src/TransformOperationExecutor.ts:160:22)
    at TransformOperationExecutor.transform (/Users/usr/my-app/node_modules/src/TransformOperationExecutor.ts:333:33)
    at ClassTransformer.plainToInstance (/Users/usr/my-app/node_modules/src/ClassTransformer.ts:77:21)
    at Object.plainToClass (/Users/usr/my-app/node_modules/src/index.ts:71:27)
    at ValidationPipe.transform (/Users/usr/my-app/node_modules/@nestjs/common/pipes/validation.pipe.js:51:39)
    at /Users/usr/my-app/node_modules/@nestjs/core/pipes/pipes-consumer.js:17:33
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
I'm not sure exactly why this is happening, but from what I've managed to debug, the LocalDateScalar defined above is transforming the value from string to LocalDate correctly; the problem is that class-transformer is also trying to transform the value, and since it's already transformed, it recognizes it as an object, which is automatically constructed through the parameterless constructor, causing this error.
This is the line from class-transformer that's calling the constructor
newValue = new (targetType as any)();
Is there maybe a way to tell class-transformer which types to ignore? I'm aware of the @Exclude decorator, but then the property is completely excluded; I just need to stop the property from being transformed via class-transformer's plainToClass method. Or should this whole situation be handled differently?
Any suggestion will be appreciated.
Not sure if this is the right solution, but I had a similar scalar <string, Big> working with the following decorators:
@Field(() => AmountScalar) // your actual scalar class
@Type(() => String) // the "serialized" type of the scalar
@Transform(({ value }) => {
  return Big(value) // custom parse function
})
amount: Big // the "parsed" type of the scalar
The two custom parse functions in the scalar can also contain some validation steps (like moment.isValid() in your case) since they will be called before class-validator.
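Applied to the LocalDate case from the question, that pattern might look like this (a sketch; LocalDateScalar is the scalar class registered above, and the parse guard is illustrative):

import { ArgsType, Field } from '@nestjs/graphql';
import { Transform, Type } from 'class-transformer';
import { IsOptional } from 'class-validator';
import { LocalDate } from '@js-joda/core';

@ArgsType()
export class DataFilter {
  @Field(() => LocalDateScalar, { nullable: true })
  @Type(() => String) // the "serialized" type, so class-transformer stops new-ing LocalDate
  @Transform(({ value }) => (value ? LocalDate.parse(value.toString()) : value)) // custom parse
  @IsOptional()
  date?: LocalDate;
}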

NestJS GraphQL federation circular resolvers

I am working on an existing GraphQL service that I have successfully broken down into smaller services using Apollo federation. I have some types being extended by other services and everything works just fine. However, as I followed this example: https://docs.nestjs.com/graphql/federation, now I have a circular reference kind of problem.
So basically I have, for example, two types:
@ObjectType()
@Directive('@key(fields: "id")')
export class Original {
  @Field(type => ID)
  id: string;
  ...
}
// extending it in the other service
@ObjectType()
@Directive('@extends')
@Directive('@key(fields: "id")')
export class Original {
  @Field(type => ID)
  @Directive('@external')
  id: string;

  @Field(type => [Other])
  @Directive('@requires(fields: "id")')
  others: Other[];
  ...
}
@ObjectType()
@Directive('@key(fields: "id")')
export class Other {
  @Field(type => ID)
  id: string;
  ...
  @Field(type => Original, { nullable: true })
  original?: Original;
}
And I have two resolvers, both in the service extending the original type:
@Resolver(of => Original)
export class OriginalResolver {
  ...
  @ResolveField(returns => [Other])
  async others(@Parent() original: Original) {
    const { id } = original;
    ...
  }
}

@Resolver(of => Other)
export class OtherResolver {
  ...
  @ResolveField((of) => Original)
  async original(@Parent() other: Other) {
    return { __typename: 'Original', id: other.original.id };
  }
}
As the resolvers suggest, I can have a query with something like this:
...,
original {
  others {
    original {
      *and so on...*
    }
  }
}
I don't want this circular query to be possible and I'm trying to remove it, but so far I've had no luck. If I simply remove the "original" field resolver, where it should return the __typename, Apollo just won't extend the original type anymore. I guess that line is basically what connects the two services so they can find the original type, but I am not that deep into Apollo so far...
So my question is: how could I remove that resolver altogether, OR, if it just has to be there for Apollo to work, is there any way to "hide" it?
Thanks in advance and feel free to ask for any more info you might need.
It's fully legal to have 'loops' in GraphQL (notice the 'graph'). GraphQL 'by design' gives you the ability to freely shape the query (and the structure of the response), including creating 'loops'.
I wouldn't call it a 'circular reference kind of problem'. It can be an efficiency/performance problem, but it is not an evil in itself when not abused.
You can use metrics to limit API usage, restrict a 'max resolving depth level', etc.
In this case, you can simply return null in the original resolver when the parent is the others type. That way, original can only be queried at the query top/root level.
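If you'd rather enforce a hard cap on nesting depth instead of special-casing one resolver, something like the graphql-depth-limit package can be wired into Nest's GraphQLModule; a sketch, assuming that package and that the driver passes validationRules through to Apollo:

import { Module } from '@nestjs/common';
import { GraphQLModule } from '@nestjs/graphql';
import depthLimit from 'graphql-depth-limit';

@Module({
  imports: [
    GraphQLModule.forRoot({
      autoSchemaFile: true,
      // reject any query that nests selections more than 5 levels deep
      validationRules: [depthLimit(5)],
    }),
  ],
})
export class AppModule {}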

In GraphQL, how to "aggregate" properties

Excuse the vague code, I can't really copy/paste. :)
I have a type in GraphQL like this:
type Thing {
  toBe: Boolean
  orNot: Boolean
}
I'm trying to create a new property on this type that is an... aggregate of those two. Basically return a new value based upon those values. The code would be like:
if (this.toBe && !this.orNot) { return "To be!"; }
if (!this.toBe && !this.orNot) { return "OrNot!"; }
Does this make sense? So it would return something like:
Thing1 {
  toBe: true;
  orNot: false;
  newProp: "To be!"
}
Yes, you can easily create aggregated fields in your GraphQL object types by handling the required logic in that aggregated field's resolver. While creating object types, you have an instance of that object, so you can easily create aggregated fields that are not present in your domain models using the object's data; this is one of the beauties of GraphQL. Note that this can differ across GraphQL library implementations. Following are examples of this use case in JavaScript and Scala.
Example in Graphql.js:
var FooType = new GraphQLObjectType({
  name: 'Foo',
  fields: {
    toBe: { type: GraphQLBoolean },
    orNot: { type: GraphQLBoolean },
    newProp: {
      type: GraphQLString,
      resolve(obj) {
        if (obj.toBe && !obj.orNot) { return "To be!"; }
        else { return "OrNot!"; }
      }
    }
  }
});
Example in Sangria-graphql:
ObjectType(
"Foo",
"graphql object type for foo",
fields[Unit, Foo](
Field("toBe",BooleanType,resolve = _.value.name),
Field("orNot",BooleanType,resolve = _.value.path),
Field("newProp",StringType,resolve = c => {
if (c.value.toBe && !c.value.orNot) "To be!" else "OrNot!"
})
)
)
The various GraphQL server library implementations all have ways to provide resolver functions that can provide the value for a field. You'd have to include it in your schema and write the code for it, but this is a reasonable thing to do and the code you quote is a good starting point.
In Apollo in particular, you pass a map of resolvers as the resolvers: option to the ApolloServer constructor. If a field doesn't have a resolver, it defaults to returning the matching field from the native JavaScript object. So you can write:
const resolvers = {
  Thing: {
    newProp: (parent) => {
      if (parent.toBe && !parent.orNot) { return "To be!"; }
      if (!parent.toBe && !parent.orNot) { return "OrNot!"; }
      return "That is the question";
    }
  }
};
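To see it end to end, here's a minimal apollo-server sketch (the thing resolver's hardcoded data is just a stand-in for your real data source):

import { ApolloServer, gql } from 'apollo-server';

const typeDefs = gql`
  type Thing {
    toBe: Boolean
    orNot: Boolean
    newProp: String
  }
  type Query {
    thing: Thing
  }
`;

const resolvers = {
  Query: {
    thing: () => ({ toBe: true, orNot: false }), // stand-in data source
  },
  Thing: {
    newProp: (parent: { toBe: boolean; orNot: boolean }) =>
      parent.toBe && !parent.orNot ? 'To be!' : 'OrNot!',
  },
};

new ApolloServer({ typeDefs, resolvers }).listen();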

How to create generics with the schema language?

Using facebook's reference library, I found a way to hack generic types like this:
type PagedResource<Query, Item> = (pagedQuery: PagedQuery<Query>) => PagedResponse<Item>

interface PagedQuery<Query> {
  query: Query;
  take: number;
  skip: number;
}

interface PagedResponse<Item> {
  items: Array<Item>;
  total: number;
}
function pagedResource({ type, resolve, args }) {
  return {
    type: pagedType(type),
    args: Object.assign(args, {
      page: { type: new GraphQLNonNull(pageQueryType()) }
    }),
    resolve
  };

  function pageQueryType() {
    return new GraphQLInputObjectType({
      name: 'PageQuery',
      fields: {
        skip: { type: new GraphQLNonNull(GraphQLInt) },
        take: { type: new GraphQLNonNull(GraphQLInt) }
      }
    });
  }

  function pagedType(type) {
    return new GraphQLObjectType({
      name: 'Paged' + type.toString(),
      fields: {
        items: { type: new GraphQLNonNull(new GraphQLList(type)) },
        total: { type: new GraphQLNonNull(GraphQLInt) }
      }
    });
  }
}
But I like how with Apollo Server I can declaratively create the schema. So the question is: how do you go about creating generic-like types with the schema language?
You can create an interface or union to achieve a similar result. I think this article does a good job explaining how to implement interfaces and unions correctly. Your schema would look something like this:
type Query {
  pagedQuery(page: PageInput!): PagedResult
}

input PageInput {
  skip: Int!
  take: Int!
}

type PagedResult {
  items: [Pageable!]!
  total: Int
}

# Regular type definitions for Bar, Foo, Baz types...
union Pageable = Bar | Foo | Baz
You also need to define a resolveType method for the union. With graphql-tools, this is done through the resolvers:
const resolvers = {
  Query: { ... },
  Pageable: {
    __resolveType: (obj) => {
      // resolve logic here, needs to return a string specifying type
      // i.e. if (obj.__typename == 'Foo') return 'Foo'
    }
  }
}
__resolveType takes the business object being resolved as its first argument (typically your raw DB result that you give GraphQL to resolve). You need to apply some logic here to figure out which of the different Pageable types we're handling. With most ORMs, you can just add some kind of typename field to the model instance you're working with and have __resolveType return that.
Edit: As you pointed out, the downside to this approach is that the returned type in items is no longer transparent to the client -- the client would have to know what type is being returned and specify the fields for items within an inline fragment like ... on Foo. Of course, your clients will still have to have some idea about what type is being returned, otherwise they won't know what fields to request.
I imagine creating generics the way you want is impossible when generating a schema declaratively. To get your schema to work the same way it currently does, you would have to bite the bullet and define PagedFoo when you define Foo, define PagedBar when you define Bar and so on.
The only other alternative I can think of is to combine the two approaches. Create your "base" schema programmatically. You would only need to define the paginated queries under the root Query using your pagedResource function. You can then use printSchema from graphql/utilities to convert it to a string that can be concatenated with the rest of your type definitions. Within your type definitions, you can use the extend keyword to build on any of the types already declared in the base schema, like this:
extend type Query {
  nonPaginatedQuery: Result
}
If you go this route, you can skip passing a resolve function to pagedResource, or defining any resolvers on your programmatically defined types, and just utilize the resolvers object you normally pass to makeExecutableSchema.
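If it helps, a rough sketch of that hybrid approach (baseSchema stands in for the GraphQLSchema you built programmatically with pagedResource, and resolvers for your resolver map; makeExecutableSchema is graphql-tools' declarative builder):

import { printSchema } from 'graphql/utilities';
import { makeExecutableSchema } from 'graphql-tools';

// turn the programmatic schema into SDL text
const baseTypeDefs = printSchema(baseSchema);

const extraTypeDefs = `
  extend type Query {
    nonPaginatedQuery: Result
  }
`;

const schema = makeExecutableSchema({
  typeDefs: [baseTypeDefs, extraTypeDefs],
  resolvers, // resolver map for the extended fields
});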
