TL;DR: I was trying to test DTO validation in the controller spec instead of in e2e specs, which are precisely crafted for that. McDoniel's answer pointed me in the right direction.
I am developing a NestJS entrypoint that looks like this:
@Post()
async doStuff(@Body() dto: MyDto): Promise<string> {
  // some code...
}
I use class-validator so that when my API receives a request, the payload is parsed and turned into a MyDto object, and the validations declared as decorators in the MyDto class are performed. Note that MyDto has an array of nested objects of class MySubDto. With the @ValidateNested and @Type decorators, the nested objects are also validated correctly.
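For reference, a minimal sketch of what such a DTO pair might look like (the decorators shown are assumptions based on the description above; property names are illustrative):

import { Type } from 'class-transformer';
import { IsInt, IsString, ValidateNested } from 'class-validator';

export class MySubDto {
  @IsString()
  readonly name: string;
}

export class MyDto {
  @IsInt()
  readonly attribute1: number;

  // each nested element is transformed to a MySubDto and validated
  @ValidateNested({ each: true })
  @Type(() => MySubDto)
  readonly subDto: MySubDto[];
}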
This works great.
Now I want to write tests for the performed validations. In my .spec file, I write:
import { validate } from 'class-validator';
// ...
it('should FAIL on invalid DTO', async () => {
  const dto = {
    //...
  };
  const errors = await validate(dto);
  expect(errors.length).not.toBe(0);
});
This fails because the validated dto object is not an instance of MyDto. I can rewrite the test like this:
it('should FAIL on invalid DTO', async () => {
  const dto = new MyDto();
  dto.attribute1 = 1;
  dto.subDto = { name: 'Vincent' };
  const errors = await validate(dto);
  expect(errors.length).not.toBe(0);
});
Validations are now properly performed on the MyDto object, but not on my nested subDto object, which means I would have to instantiate aaaall the objects of my DTO with their corresponding classes, which would be quite inefficient. Also, instantiating the classes means that TypeScript will raise errors if I deliberately omit some required properties or assign incorrect values.
So the question is:
How can I use NestJS's built-in request body parser in my tests, so that I can write any JSON I want for the dto, parse it as a MyDto object and validate it with the class-validator validate function?
Any alternative, better-practice ways to test validations are welcome too!
Although we should test how our validation DTOs work with the ValidationPipe, that is a form of integration or e2e testing. Unit tests are unit tests, right?! Every unit should be testable independently.
The DTOs in Nest.js are perfectly unit-testable. It becomes necessary to unit-test the DTOs when they contain complex regular expressions or sanitization logic.
Creating an object of the DTO for test
The request body parser in Nest.js that you are looking for is the class-transformer package. It has a function plainToInstance() to turn your literal or JSON object into an object of the specified type. In your example the specified type is the type of your DTO:
const myDtoObject = plainToInstance(MyDto, myBodyObject)
Here, myBodyObject is the plain object that you created for the test, like:
const myBodyObject = { attribute1: 1, subDto: { name: 'Vincent' } }
The plainToInstance() function also applies all the transformations that you have in your DTO. If you just want to test the transformations, you can assert after this statement. You don't have to call the validate() function to test the transformations.
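For example, a minimal sketch of such an assertion (it assumes MyDto is decorated roughly as sketched earlier; adjust the expectations to your actual decorators):

const myDtoObject = plainToInstance(MyDto, myBodyObject)
// plainToInstance turns the plain literal into a real class instance,
// applying any @Type()/@Transform() decorators along the way
expect(myDtoObject).toBeInstanceOf(MyDto)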
Validating the object of the DTO in test
To emulate the validation of Nest.js, simply pass the myDtoObject to the validate() function of the class-validator package:
const errors = await validate(myDtoObject)
Also, if your DTO or SubDTO object is too big or too complex to create, you have the option to skip the remaining properties or subObjects like your subDto:
const errors = await validate(myDtoObject, { skipMissingProperties: true })
Now your test object could be without the subDto, like:
const myBodyObject = { attribute1: 1 }
Asserting the errors
Apart from asserting that the errors array is not empty, I also like to specify a custom error message for each validation in the DTO:
@IsPositive({ message: `Attribute1 must be a positive number.` })
readonly attribute1: number
One advantage of a custom error message is that we can write it in a user-friendly way instead of the generic messages created by the library. Another big advantage is that I can assert this error message in my tests. This way I can be sure that the errors array is not empty because it contains the error for this particular validation and not something else:
expect(stringified(errors)).toContain(`Attribute1 must be a positive number.`)
Here, stringified() is a simple utility function to convert the errors object to a JSON string, so we can search for our error message in it:
export function stringified(errors: ValidationError[]): string {
  return JSON.stringify(errors)
}
Your final test code
Instead of the controller.spec.ts file, create a new file specific to your DTO, like my-dto.spec.ts for unit tests of your DTO. A DTO can have plenty of unit tests and they should not be mixed with the controller's tests:
it('should fail on invalid DTO', async () => {
  const myBodyObject = { attribute1: -1, subDto: { name: 'Vincent' } }
  const myDtoObject = plainToInstance(MyDto, myBodyObject)
  const errors = await validate(myDtoObject)
  expect(errors.length).not.toBe(0)
  expect(stringified(errors)).toContain(`Attribute1 must be a positive number.`)
})
Notice how you don't have to assign values to the properties one by one to create the myDtoObject. In most cases, the properties of your DTOs should be marked readonly, so you can't assign the values one by one. plainToInstance() to the rescue!
That's it! You were almost there, unit testing your DTO. Good efforts! Hope that helps now.
To test input validation with the validation pipes, I think it is agreed that the best place to do this is in e2e tests rather than in unit tests; just make sure that you remember to register your pipes (if you normally use app.useGlobalPipes() instead of dependency injection).
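For reference, a rough sketch of such an e2e test (assuming a standard NestJS testing setup with supertest; the module import path and the /stuff route are placeholders):

import { INestApplication, ValidationPipe } from '@nestjs/common';
import { Test } from '@nestjs/testing';
import * as request from 'supertest';
import { AppModule } from '../src/app.module';

describe('POST /stuff (e2e)', () => {
  let app: INestApplication;

  beforeAll(async () => {
    const moduleRef = await Test.createTestingModule({ imports: [AppModule] }).compile();
    app = moduleRef.createNestApplication();
    app.useGlobalPipes(new ValidationPipe()); // register the pipe, just like in main.ts
    await app.init();
  });

  afterAll(async () => {
    await app.close();
  });

  it('rejects an invalid body with 400', () => {
    return request(app.getHttpServer())
      .post('/stuff')
      .send({ attribute1: -1 })
      .expect(400);
  });
});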
Related
I have a specific use case where a user’s data sources are conditional, e.g. based on the data sources saved in the database for each specific user.
This also means every data source has unique credentials for every user, which is fine for RESTDataSource because I can use the willSendRequest to set the Authentication headers before each request.
However, I have custom data sources that have proprietary clients (for example JSForce for Salesforce) - and they have their own fetch mechanism.
As of now, I have a custom transformer directive that fetches the tokens from the database and adds them to the context. However, the directive is run before the dataSource.initialize() method, so I can’t use the credentials there because the context doesn’t have them yet.
I also don’t want to initialize all data sources for every user even if they don’t use a given data source in this request, but the dataSources() function doesn’t accept any parameters and is not contextual.
Bottom line is: is it possible to pass data sources conditionally, even based on the Express request? When is the right time to pass the tokens and credentials to the dataSource? Maybe add my own custom init function and call it from the directive?
So you have options. Here are 2 choices:
1. Just add your dataSources
If you just initialize all dataSources, each one can internally check whether the user has access. You could have a getClient function that resolves to the client or throws an UnauthorizedError, as appropriate, as sketched below.
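A possible shape for that idea (SalesforceAPI, ProprietaryClient and the credential fields are made-up names, not from your code):

import { DataSource, DataSourceConfig } from 'apollo-datasource';

// Hypothetical stand-in for a proprietary client such as JSForce
class ProprietaryClient {
  constructor(private credentials: { token: string }) {}
  async fetchAccounts(): Promise<unknown[]> {
    return []; // placeholder: a real client would call the vendor API here
  }
}

interface MyContext {
  user?: { credentials?: { salesforce?: { token: string } } };
}

export class SalesforceAPI extends DataSource<MyContext> {
  private context!: MyContext;

  initialize(config: DataSourceConfig<MyContext>) {
    this.context = config.context; // credentials were put on the context elsewhere
  }

  // Every resolver call goes through getClient(); users without this
  // data source get an authorization error instead of a broken client.
  private getClient(): ProprietaryClient {
    const creds = this.context.user?.credentials?.salesforce;
    if (!creds) {
      throw new Error('Unauthorized: no Salesforce credentials for this user');
    }
    return new ProprietaryClient(creds);
  }

  async getAccounts() {
    return this.getClient().fetchAccounts();
  }
}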
2. Don't just add your dataSources
So if you really don't want to initialize the dataSources at ALL, you can absolutely do this by adding the "dataSources" yourself, just like Apollo does it.
const server = new ApolloServer({
  // this example uses apollo-server-express
  context: async ({ req, res }) => {
    const accessToken = req.headers?.authorization?.split(' ')[1] || ''
    const user = accessToken && buildUser(accessToken)
    const context = { user }
    // You can't use the name "dataSources" in your config because ApolloServer will puke, so I called them "services"
    await addServices(context)
    return context
  }
})
const addServices = async (context) => {
  const { user } = context;
  const services = {
    userAPI: new UserAPI(),
    postAPI: new PostAPI(),
  }
  if (user.isAdmin) {
    services.adminAPI = new AdminAPI()
  }

  const initializers = [];
  for (const service of Object.values(services)) {
    if (service.initialize) {
      initializers.push(
        service.initialize({
          context,
          cache: null, // or add your own cache
        })
      );
    }
  }
  await Promise.all(initializers);

  /**
   * this is where you have to deviate from Apollo.
   * You can't use the name "dataSources" in your config because ApolloServer will puke
   * with the error 'Please use the dataSources config option instead of putting dataSources on the context yourself.'
   */
  context.services = services;
}
Some notes:
1. You can't call them "dataSources"
If you return a property called "dataSources" on your context object, Apollo will not like it very much [meaning it throws an Error]. In my example, I used the name "services", but you can do whatever you want... except "dataSources".
With the above code, in your resolvers, just reference context.services.whatever instead.
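For example, a resolver would then look something like this (the resolver and the getUser method are placeholders):

const resolvers = {
  Query: {
    // before: context.dataSources.userAPI.getUser(args.id)
    user: (_parent: unknown, args: { id: string }, context: any) =>
      context.services.userAPI.getUser(args.id),
  },
};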
2. This is what Apollo does
This pattern is copied directly from what Apollo already does for dataSources [source]
3. I recommend you still treat them as DataSources
I recommend you stick to the DataSources pattern and that your "services" all extend DataSource. It's going to be easier for everyone involved.
4. Type safety
If you're using TypeScript or something similar, you're going to lose a bit of type safety, since context.services is either going to be one shape or another. Even if you're not, if you're not careful you may end up throwing "Cannot read property users of undefined" errors instead of "Unauthorized" errors. You might be better off creating "dummy services" that reflect the same object shape but just throw Unauthorized, as sketched below.
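A sketch of that dummy-service idea (AdminAPI, DummyAdminAPI and listUsers are illustrative names):

import { DataSource } from 'apollo-datasource';

class AdminAPI extends DataSource {
  async listUsers(): Promise<string[]> {
    return []; // placeholder for the real implementation
  }
}

// Same shape as AdminAPI, but every method throws an authorization error,
// so callers see "Unauthorized" instead of "Cannot read property ... of undefined".
class DummyAdminAPI extends DataSource {
  async listUsers(): Promise<string[]> {
    throw new Error('Unauthorized');
  }
}

// in addServices():
// services.adminAPI = user.isAdmin ? new AdminAPI() : new DummyAdminAPI();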
I have created a decorator to retrieve the graphql query context and do some logic with it. It looks like this:
export const GraphQlProjections = (options?: ProjectionOptions) =>
createParamDecorator<ProjectionOptions, ExecutionContext, string[]>(
(opts: ProjectionOptions, ctx: ExecutionContext) => {
const gqlContext = GqlExecutionContext.create(ctx)
const info = gqlContext.getInfo()
const fields = Object.keys(fieldsProjection(info, opts))
return fields
}
)(options)
I'd like to write some unit tests for this - but I have no idea how to even go about it.
I've found some documentation on getting the decorator factory, but it does not help with setting up or mocking the execution context. The Apollo Server docs seem to reference some sort of mocking, but they don't tell me how to achieve my goal.
I basically need to say 'Given this query, say, Query { user { name } }, what will my decorator return?'
To achieve this, it seems I need to somehow mock the execution context to contain a GraphQLResolveInfo object, which I also somehow need to generate. How can I achieve this? Or am I going about this the wrong way?
What's the standard means of processing input to an AWS Lambda handler function, when the format of the incoming JSON varies depending on the type of trigger?
e.g. I have a Lambda function that gets called when an object is created in an S3 bucket, or when an hourly scheduled event fires. Obviously, the JSON passed to the handler is formatted differently.
Is it acceptable to overload Lambda handler functions, with the input type defined as S3Event for one signature and ScheduledEvent for the other? If not, are developers simply calling JsonConvert.DeserializeObject in try blocks? Or is the standard practice to establish multiple Lambda functions, one for each input type (yuck!)?
You should use one function per event.
Having multiple triggers for one Lambda will just make things way harder, as you'll end up with a bunch of if/else or switch statements, or even factory methods if you want to apply design patterns.
Now think of Lambda functions as small and maintainable: pieces of code that should do one thing and do it well. The moment you start having multiple triggers, you kind of end up with a "Lambda monolith", as it will have way too many responsibilities. Not only that, you strongly couple your Lambda functions with your events, meaning that once a new trigger is added, your Lambda code has to change. This is just not scalable after two or three triggers.
Another drawback is that you are bound to using one language only if you architect it like that. For some use cases, Java may be the best option. But for others, it may be Node.js, Python, Go...
Essentially, your functions should be small enough to be easily maintainable and even rewritten if necessary. There's absolutely nothing wrong with creating one function per event, although, apparently, you strongly disapprove of it. Think of every Lambda as a separate microservice, which scales out independently, has its own CI/CD pipeline and its own suite of tests.
Another thing to consider is if you want to limit your Lambda concurrent executions depending on your trigger type. This would be unachievable via the "One-Lambda-Does-It-All" model.
Stick with one Lambda per trigger and you'll sleep better at night.
This is actually possible by doing the following:
Have the Lambda signature take a Stream rather than the Amazon event type, so we can get the raw JSON message.
Read the JSON contents of the stream as a string.
Deserialize the string to a custom type in order to identify the event source.
Use the event source information to deserialize the JSON a second time to the appropriate type for the event source.
For example:
public async Task FunctionHandler(Stream stream, ILambdaContext context)
{
    using var streamReader = new StreamReader(stream);
    var json = await streamReader.ReadToEndAsync();

    var serializationOptions = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
    var awsEvent = JsonSerializer.Deserialize<AwsEvent>(json, serializationOptions);
    var eventSource = awsEvent?.Records.Select(e => e.EventSource).SingleOrDefault();

    await (eventSource switch
    {
        "aws:s3" => HandleAsync(JsonSerializer.Deserialize<S3Event>(json, serializationOptions), context),
        "aws:sqs" => HandleAsync(JsonSerializer.Deserialize<SQSEvent>(json, serializationOptions), context),
        _ => throw new ArgumentOutOfRangeException(nameof(stream), $"Unsupported event source '{eventSource}'."),
    });
}

public async Task HandleAsync(S3Event @event, ILambdaContext context) => ...
public async Task HandleAsync(SQSEvent @event, ILambdaContext context) => ...

public sealed class AwsEvent
{
    public List<Record> Records { get; set; }

    public sealed class Record
    {
        public string EventSource { get; set; }
    }
}
I am wondering what the use of asObservable is.
As per docs:
An observable sequence that hides the identity of the
source sequence.
But why would you need to hide the sequence?
When to use Subject.prototype.asObservable()
The purpose of this is to prevent leaking the "observer side" of the Subject out of an API. Basically to prevent a leaky abstraction when you don't want people to be able to "next" into the resulting observable.
Example
(NOTE: This really isn't how you should turn a data source like this into an Observable; instead you should use the new Observable constructor. See below.)
const myAPI = {
  getData: () => {
    const subject = new Subject();
    const source = new SomeWeirdDataSource();
    source.onMessage = (data) => subject.next({ type: 'message', data });
    source.onOtherMessage = (data) => subject.next({ type: 'othermessage', data });
    return subject.asObservable();
  }
};
Now when someone gets the observable result from myAPI.getData() they can't next values in to the result:
const result = myAPI.getData();
result.next('LOL hax!'); // throws an error because `next` doesn't exist
You should usually be using new Observable(), though
In the example above, we're probably creating something we didn't mean to. For one, getData() isn't lazy like most observables: it's going to create the underlying data source SomeWeirdDataSource (and presumably cause some side effects) immediately. This also means that if you retry or repeat the resulting observable, it's not going to work like you think it will.
It's better to encapsulate the creation of your data source within your observable like so:
const myAPI = {
  getData: () => new Observable(subscriber => {
    const source = new SomeWeirdDataSource();
    source.onMessage = (data) => subscriber.next({ type: 'message', data });
    source.onOtherMessage = (data) => subscriber.next({ type: 'othermessage', data });
    return () => {
      // Even better, now we can tear down the data source for cancellation!
      source.destroy();
    };
  })
};
With the code above, any behavior, including making it "not lazy" can be composed on top of the observable using RxJS's existing operators.
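For instance, a rough sketch of composing sharing behaviour on top of the lazy getData() observable from the example above, using nothing but existing operators (the operator choice here is just one possibility):

import { shareReplay } from 'rxjs/operators';

// Still lazy: SomeWeirdDataSource is only created when someone subscribes.
const data$ = myAPI.getData();

// One shared underlying data source for all subscribers, with the last
// message replayed to late subscribers, all composed with plain operators.
const shared$ = data$.pipe(shareReplay({ bufferSize: 1, refCount: true }));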
A Subject can act both as an observer and an observable.
An Observable has 2 methods:
subscribe
unsubscribe
Whenever you subscribe to an observable, you pass in an observer which has next, error and complete methods on it.
You'd need to hide the sequence because you don't want the stream source to be publicly available in every component. You can refer to @BenLesh's example above for the same.
P.S.: When I first came across Reactive JavaScript, I was not able to understand asObservable, because I had to make sure I understood the basics clearly before going for asObservable. :)
In addition to this answer I would mention that in my opinion it depends on the language in use.
For untyped (or weakly typed) languages like JavaScript it might make sense to conceal the source object from the caller by creating a delegate object, as the asObservable() method does. Although, if you think about it, it won't prevent a caller from doing observable.source.next(...). So this technique doesn't prevent the Subject API from leaking, but it does make it more hidden from the caller.
On the other hand, for strongly typed languages like TypeScript the method asObservable() doesn't seem to make much sense (if any).
Statically typed languages solve the API leakage problem by simply utilizing the type system (e.g. interfaces). For example, if your getData() method is defined as returning Observable<T> then you can safely return the original Subject, and the caller will get a compilation error if attempting to call getData().next() on it.
Think about this modified example:
let myAPI: { getData: () => Observable<any> }
myAPI = {
getData: () => {
const subject = new Subject()
// ... stuff ...
return subject
}
}
myAPI.getData().next() // <--- error TS2339: Property 'next' does not exist on type 'Observable<any>'
Of course, since it all compiles to JavaScript at the end of the day, there might still be cases when you want to create a delegate. But my point is that the room for those cases is much smaller than with vanilla JavaScript, and in the majority of cases you probably don't need that method.
(Typescript Only) Use Types Instead of asObservable()
I like what Alex Vayda is saying about using types instead, so I'm going to add some additional information to clarify.
If you use asObservable(), then you are running the following code.
/**
 * Creates a new Observable with this Subject as the source. You can do this
 * to create customize Observer-side logic of the Subject and conceal it from
 * code that uses the Observable.
 * @return {Observable} Observable that the Subject casts to
 */
asObservable(): Observable<T> {
  const observable = new Observable<T>();
  (<any>observable).source = this;
  return observable;
}
This is useful for Javascript, but not needed in Typescript. I'll explain why below.
Example
export class ExampleViewModel {
  // I don't want the outside codeworld to be able to set this INPUT
  // so I'm going to make it private. This means it's scoped to this class
  // and only this class can set it.
  private _exampleData = new BehaviorSubject<ExampleData>(null);

  // I do however want the outside codeworld to be able to listen to
  // this data source as an OUTPUT. Therefore, I make it public so that
  // any code that has reference to this class can listen to this data
  // source, but can't write to it because of a type cast.
  // So I write this
  public exampleData$ = this._exampleData as Observable<ExampleData>;
  // and not this
  // public exampleData$ = this._exampleData.asObservable();
}
Both do the same thing, but one doesn't add additional code calls or memory allocation to your program.
❌this._exampleData.asObservable();❌
Requires additional memory allocation and computation at runtime.
✅this._exampleData as Observable<ExampleData>;✅
Handled by the type system and will NOT add additional code or memory allocation at runtime.
Conclusion
If your colleagues try this, referenceToExampleViewModel.exampleData$.next(new ExampleData());, it will be caught at compile time and the type system won't let them, because exampleData$ has been cast to Observable<ExampleData> and is no longer of type BehaviorSubject<ExampleData>. They will, however, still be able to listen to (.subscribe()) that source of data or extend it (.pipe()).
This is useful when you only want a particular service or class to be setting the source of information. It also helps with separating the input from the output, which makes debugging easier.
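In consumer code that looks roughly as follows (referenceToExampleViewModel is assumed to be an instance of the ExampleViewModel above):

import { map } from 'rxjs/operators';

// Listening and composing are fine:
referenceToExampleViewModel.exampleData$.subscribe(data => console.log(data));
const mapped$ = referenceToExampleViewModel.exampleData$.pipe(map(data => data));

// Writing from the outside does not compile:
// referenceToExampleViewModel.exampleData$.next(new ExampleData());
// error TS2339: Property 'next' does not exist on type 'Observable<ExampleData>'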
I am writing a unit test for a void method. That method loads a collection into ViewData["CityList"]. The method is:
public void PopulateCityCombo() {
    IEnumerable<Cities> c = service.GetCities();
    ViewData["CityList"] = c.Select(e => new Cities { ID = e.ID, Name = e.Name });
}
Now I do not know how to unit test this using Moq, since the controller method is void and does not return data. Can anyone tell me how I can achieve that?
On a side note, I would shy away from using ViewData within controller methods as in your example. The ViewData dictionary approach is fast and fairly easy to implement; however, it can lead to typos and errors that are not caught at compile time. An alternative would be to use the ViewModel pattern, which lets you use strongly-typed classes for the specific view you need to expose values or content within, ultimately giving you type safety and compile-time checking along with IntelliSense.
Switching to the ViewModel pattern would allow you to call the PopulateCityCombo() method from your controller to populate a ViewModel that in turn would be passed to the corresponding view.
From there, you would need to inject a mock service layer into your controller's constructor from your unit test.
// arrange
var mock = new Mock<servicelayer>();
mock.Setup(x=>x.GetCities()).Returns(expectedData);
var controller = new YourController(mock.Object);
// act
var result = controller.ControllerMethod() as ViewResult;
var resultData = (YourViewModel)result.ViewData.Model;
// assert
// Your assertions