NestJS Microservice Exception handling - microservices

I have set up a microservice architecture that looks like the following:
api-gateway (NestFactory.create(AppModule);)
service 1 (NestFactory.createMicroservice<MicroserviceOptions>)
service 2 (NestFactory.createMicroservice<MicroserviceOptions>)
...
A Service looks like this:
service.controller.ts
service.handler.ts
Here the handler is like a service in a typical monolith and contains the logic.
Currently, I am catching Exceptions the following way:
The handler makes a call to the database and fails due to a duplicated key (i.e. email).
I catch this exception and convert it to an RpcException
In the ApiGateway I catch the RpcException like so:
return new Promise<Type>((resolve, reject) => {
  this.clientProxy
    .send<Type>('MessagePattern', { dto: DTO })
    .subscribe(resolve, (err) => {
      logger.error(err);
      reject(err);
    });
});
Then I have to catch the rejected Promise again and throw an HttpException so that the ExceptionFilter sends a proper error response. (Throwing an Error inside the Promise instead of rejecting it doesn't work.)
So basically, I have three try/catch blocks for one exception.
This looks very verbose to me.
Is there any better way or best practice when it comes to NestJS Microservices?
Can we have an Interceptor for the returned messages received by this.clientProxy.send and pipe it to send the error response to the client without catching it twice explicitly?

Not a complete answer to your question, but it's better than nothing. :)
I try to avoid the .subscribe, reject(...) approach whenever possible.
Even though the send method returns an Observable, in most cases you expect only one response, so .toPromise() makes sense (in newer RxJS versions, firstValueFrom replaces the deprecated toPromise).
Once it's a promise, you can use the async/await syntax; you have no callbacks and can catch all exceptions in one place (and rethrow them if you want to). It helps a bit.
try {
  const payload = { dto: DTO };
  const response = await this.clientProxy.send<Type>('MessagePattern', payload).toPromise();
} catch (err) {
  this.logger.error(err);
}
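Building on that, a minimal sketch of rethrowing on the gateway side so the ExceptionFilter can build a proper HTTP response. HttpException comes from @nestjs/common; the err.status / err.message fields are assumptions about how the microservice serialized its RpcException:
try {
  const payload = { dto: DTO };
  return await this.clientProxy.send<Type>('MessagePattern', payload).toPromise();
} catch (err) {
  this.logger.error(err);
  // Map the RPC error back to an HTTP error; the shape of `err` depends on
  // how the microservice constructed the RpcException.
  throw new HttpException(err.message ?? 'Internal server error', err.status ?? 500);
}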
On the server side, you can effectively define Interceptors, which are almost identical to the Interceptors you would use for API controllers.
@MessagePattern(messagePattern)
@UseInterceptors(CatchExceptionInterceptor)
public async someMethod(...) { }
You should implement the NestInterceptor interface:
@Injectable()
export class CatchExceptionInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, stream$: Observable<any>): Observable<any> {
    return stream$.pipe(
      catchError(...)
    );
  }
}
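For reference, a filled-in sketch against the current NestInterceptor signature (which takes a CallHandler instead of the raw stream); wrapping every error in an RpcException is just one possible policy, not the only one:
import { CallHandler, ExecutionContext, Injectable, NestInterceptor } from '@nestjs/common';
import { RpcException } from '@nestjs/microservices';
import { Observable, throwError } from 'rxjs';
import { catchError } from 'rxjs/operators';

@Injectable()
export class CatchExceptionInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler): Observable<any> {
    return next.handle().pipe(
      // Wrap whatever the handler throws so the gateway's client proxy
      // receives a serializable RpcException payload.
      catchError((err) => throwError(() => new RpcException(err.message ?? err))),
    );
  }
}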

Related

NestJS handle HttpModule errors

NestJS has the following example for using their HttpModule:
@Injectable()
export class CatsService {
  constructor(private readonly httpService: HttpService) {}

  findAll(): Observable<AxiosResponse<Cat[]>> {
    return this.httpService.get('http://localhost:3000/cats');
  }
}
My question is: how does the client code (most likely a Controller) handle this response? How does it treat the Observable so that Cat[] can be accessed? And what if the HTTP request throws an error such as a 404?
How does a NestJS client (Controller) in this case interact with the findAll() method provided by the service?
I am not too familiar with NestJS, but if you want to run an Observable-based HTTP request, you generally do the following to consume the result and catch errors:
this.catService.findAll().pipe(
  // axios wraps the result in data
  map(res => res.data),
  catchError(e => {
    // ...handle the 404 error here
    return of(e);
  })
).subscribe();
If, for whatever reason, the URL you provide to the HttpService returns a 404, that error will propagate back through the service, to the controller, and eventually to the client that called the original URL. Under the hood, NestJS will subscribe to all returned Observables so that you don't need to worry about it, you can just return the call directly from your Controller. So in the example above, say we have a controller that looks like this:
@Controller('cats')
export class CatsController {
  constructor(private readonly catsService: CatsService) {}

  @Get()
  findAllCats(): Observable<Cat[]> {
    return this.catsService.findAll();
  }
}
And CatsService looks like this:
@Injectable()
export class CatsService {
  constructor(private readonly httpService: HttpService) {}

  findAll(): Observable<Cat[]> {
    // map to res.data so the declared Observable<Cat[]> return type matches
    return this.httpService.get<Cat[]>('http://localhost:3000/cats').pipe(map(res => res.data));
  }
}
Assuming you are calling off to another server (i.e. this server is not running on port 3000) and /cats is not a valid endpoint and that server returns a 404 to you, the httpService's response will bubble up through the CatsService to the CatsController where NestJS will handle the subscription, and send the response back to the client. If you are looking to do some custom error handling, that will need to be handled in a different way. A great way to test how the HttpService responds to things is to create a simple endpoint and call off to a bad URL (like https://google.com/item which is a 404)
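If you do want custom error handling, a hedged sketch is to catch the error inside the service and rethrow a Nest HTTP exception. This assumes the usual AxiosError shape (err.response?.status), that the Cat interface is defined elsewhere, and a recent Nest version where HttpService lives in @nestjs/axios (older versions exported it from @nestjs/common):
import { HttpException, HttpStatus, Injectable } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { Observable, throwError } from 'rxjs';
import { catchError, map } from 'rxjs/operators';

@Injectable()
export class CatsService {
  constructor(private readonly httpService: HttpService) {}

  findAll(): Observable<Cat[]> {
    return this.httpService.get<Cat[]>('http://localhost:3000/cats').pipe(
      map(res => res.data),
      // Translate the upstream failure into our own HTTP exception instead of
      // leaking the raw Axios error to the client.
      catchError(err =>
        throwError(() => new HttpException('Upstream cats service failed', err.response?.status ?? HttpStatus.BAD_GATEWAY)),
      ),
    );
  }
}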

Mono returned by ServerRequest.bodyToMono() method not extracting the body if I return ServerResponse immediately

I am using the reactive web stack in Spring WebFlux. I have implemented a HandlerFunction for a POST request. I want the server to return immediately, so I have implemented the handler as below:
public class Sample implements HandlerFunction<ServerResponse> {
    public Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> bodyMono = request.bodyToMono(String.class);
        bodyMono.map(str -> {
            System.out.println("body got is " + str);
            return str;
        }).subscribe();
        return ServerResponse.status(HttpStatus.CREATED).build();
    }
}
But the print statement inside the map function is not getting called. It means the body is not getting extracted.
If I do not return the response immediately and use
return bodyMono.then(ServerResponse.status(HttpStatus.CREATED).build())
then the map function is getting called.
So, how can I do processing on my request body in the background?
Please help.
EDIT
I tried using flux.share() like below:
Flux<String> bodyFlux = request.bodyToMono(String.class).flux().share();
Flux<String> processFlux = bodyFlux.map(str -> {
    System.out.println("body got is");
    try {
        Thread.sleep(1000);
    } catch (Exception ex) {
    }
    return str;
});
processFlux.subscribeOn(Schedulers.elastic()).subscribe();
return bodyFlux.then(ServerResponse.status(HttpStatus.CREATED).build());
In the above code, sometimes the map function is getting called and sometimes not.
As you've found, you can't just arbitrarily subscribe() to the Mono returned by bodyToMono(), since in that case the body simply doesn't get passed into the Mono for processing. (You can verify this by putting a single() call in that Mono; it'll throw an exception since no element will be emitted.)
So, how can I do processing on my request body in the background?
If you really still want to just use reactor to do a long task in the background while returning immediately, you can do something like:
return request.bodyToMono(String.class).doOnNext(str -> {
    Mono.just(str).publishOn(Schedulers.elastic()).subscribe(s -> {
        System.out.println("proc start!");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("proc end!");
    });
}).then(ServerResponse.status(HttpStatus.CREATED).build());
This approach immediately publishes the emitted element to a new Mono, set to publish on an elastic scheduler, that is then subscribed in the background. However, it's kind of ugly, and it's not really what reactor is designed to do. You may be misunderstanding the idea behind reactor / reactive programming here:
It's not written with the idea of "returning a quick result and then doing stuff in the background" - that's generally the purpose of a work queue, often implemented with something like RabbitMQ or Kafka. Its raison d'être is instead to be non-blocking, so a single thread is never left idly blocked, waiting for something else to complete.
The map() method isn't designed for side effects; it's designed to transform each object into another. For side effects, you want doOnNext() instead.
Reactor uses a single thread by default, so your "additional processing" in your map() method would still block that thread.
If your application is for anything more than quick demo purposes, and/or you need to make heavy use of this pattern, then I'd seriously consider setting up a proper work queue instead.
This is not possible.
Web servers (including Reactor Netty, Tomcat, etc) clean up and recycle resources when request processing is done. This means that when your controller handler is done, the HTTP resources, the request itself, reusable buffers, etc are recycled or closed. At that point, you cannot read from the request body anymore.
In your case, you need to read and buffer the whole request body first, then return a response and kick off a task for processing that request in a separate execution.

Handle data after http get request in angular

I have a service that requests data via a GET method. I'd like to map the response to an object storing some IDs and use those IDs to make other HTTP requests.
I was told this isn't usually done in a callback manner. I looked at How do I return the response from an asynchronous call?, but I don't think that's the usual way to implement services. Any hints are much appreciated.
I tried calling this in ngOnInit/the constructor in Angular to make sure the object was filled before other methods were called, without success.
@Injectable()
export class ContactService {
  storeIds;

  getIds(callback: Function) {
    this.http.get<any>(IdsUrl, Config.options).subscribe(res => {
      callback(res);
    });

    getIds(res => {
      this.storeIds = {
        profileId: res.profile,
        refIds: res.refIds
      };
    });

    // this.storeIds returns undefined as it's an async call
    this.http.post<any>(WebserviceUrl + this.storeIds.profileId, data, headers);
    // ..... many other web services that rely on these Ids
  }
}
Just create another service called StoreIdsService and update it with the response you get from your first API call, getIds. The idea is to have StoreIdsService as a singleton service that keeps the state of your storeIds. You can then inject StoreIdsService into whichever component needs the storeIds.
It's one of many ways to share data between components in Angular.
Please refer to this answer someone has posted.
How do I share data between components in Angular 2?
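A minimal sketch of such a singleton service, assuming the profile/refIds shape from the question (the names, and providedIn: 'root', which requires Angular 6+, are illustrative):
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' }) // one shared instance across the app
export class StoreIdsService {
  storeIds: { profileId: string; refIds: string[] } | undefined;

  setIds(res: any): void {
    this.storeIds = { profileId: res.profile, refIds: res.refIds };
  }
}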
You can simply assign the service response to the storeIds property inside the subscribe callback, and call the subsequent services inside it if you need to.
@Injectable()
export class ContactService {
  storeIds;

  getIds() {
    this.http.get<any>(IdsUrl, Config.options).subscribe(res => {
      this.storeIds = {
        profileId: res.profile,
        refIds: res.refIds
      };
      this.otherapicall1();
      this.otherapicall2();
    });
  }
}
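If nested subscribe calls get unwieldy, a hedged alternative sketch chains the dependent request with RxJS operators instead; IdsUrl, Config.options, WebserviceUrl, headers and data are the names from the question's snippet:
import { switchMap, tap } from 'rxjs/operators';

getIdsAndPost(data: any) {
  return this.http.get<any>(IdsUrl, Config.options).pipe(
    // keep a copy of the ids for later use
    tap(res => this.storeIds = { profileId: res.profile, refIds: res.refIds }),
    // only fire the dependent request once the ids have arrived
    switchMap(res => this.http.post<any>(WebserviceUrl + res.profile, data, headers))
  );
}
The caller then subscribes once to getIdsAndPost(...).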

rxjs operator to define logic after subscribe call

const source = Rx.Observable.of(1);
const example = source
.do(val => console.log('do called'));
example.subscribe(val => console.log('subscribe called'));
//Output :
do called
subscribe called
This example shows that do is executed before subscribe.
Which operator do I need to use to define logic after subscribe is executed?
I need this so I can define the logic once and have it executed after each subscribe call; that also helps respect SRP (the single responsibility principle). An example is handling caching logic in an interceptor, using some specific operator I am looking for, and subscribing in the services.
The way I handle an interceptor is as follows; it may help, if I understand your requirements correctly.
...
private interceptor(observable: Observable<Response>): Observable<Response> {
  return observable
    .map(res => {
      return res;
    })
    .catch((err) => {
      // handle specific error
      return Observable.throw(err);
    })
    .finally(() => {
      // after the request
      console.info("After the Request");
    });
}

protected get(req: getHttpParams): Observable<Response> {
  return this.interceptor(this.httpClient.get(`${path}/${String(req.id)}`, req.options));
}
...
I would also recommend taking a look at Angular 5's built-in interceptor (HttpInterceptor) for HTTP requests specifically.
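For reference, a sketch of the same idea with current pipeable operators, where finalize is the operator that runs after the stream completes, errors, or is unsubscribed (i.e. after the work triggered by subscribe is done); withAfterHook and the logging are illustrative names, not an existing API:
import { Observable, throwError } from 'rxjs';
import { catchError, finalize, tap } from 'rxjs/operators';

function withAfterHook<T>(source$: Observable<T>): Observable<T> {
  return source$.pipe(
    tap(value => console.log('value emitted', value)),
    catchError(err => {
      // handle or map the specific error here, then rethrow
      return throwError(() => err);
    }),
    // runs once the observable completes, errors, or is unsubscribed
    finalize(() => console.info('After the request'))
  );
}
The interceptor(...) wrapper in the answer above could delegate to something like this without changing its call sites.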

Can an Owin Middleware return a response earlier than the Invoke method returns?

I have the following middleware code:
public class UoWMiddleware : OwinMiddleware
{
    readonly IUoW uow;

    public UoWMiddleware(OwinMiddleware next, IUoW uow) : base(next)
    {
        this.uow = uow;
    }

    public override async Task Invoke(IOwinContext context)
    {
        try
        {
            await Next.Invoke(context);
        }
        catch
        {
            uow.RollBack();
            throw;
        }
        finally
        {
            if (uow.Status == Base.SharedDomain.UoWStatus.Running)
            {
                var response = context.Response;
                if (response.StatusCode < 400)
                {
                    Thread.Sleep(1000);
                    uow.Commit();
                }
                else
                    uow.RollBack();
            }
        }
    }
}
Occasionally we observe in Fiddler that the response returns to the client before uow.Commit() is called. For example, we put a breakpoint on uow.Commit and see that the response has already been returned to the client even though we are still waiting on the breakpoint. This is somewhat unexpected; I would think the response is returned strictly after the Invoke method ends. Am I missing something?
In Owin/Katana the response body (and, of course, the headers) are sent to the client at the precise moment when a middleware calls Write on the Response object of the IOwinContext.
This means that if your next middleware is writing the response body your client will receive it before your server-side code returns from the call to await Next.Invoke().
That's how Owin is designed, and depends on the fact that the Response stream may be written just once in a single Request/Response life-cycle.
Looking at your code, I can't see any major problem in such behavior, because you are simply reading the response headers after the response is written to the stream, and thus not altering it.
If, instead, you require to alter the response written by your next middleware, or you strictly need to write the response after you execute further server-side logic, then your only option is to buffer the response body into a memory stream, and then copy it into the real response stream (as per this answer) when you are ready.
I have successfully tested this approach in a different use case (but sharing the same concept) that you may find looking at this answer: https://stackoverflow.com/a/36755639/3670737
Reference:
Changing the response object from OWIN Middleware
