I'm planning to develop an application with microservices using NestJS. I want to know how to apply circuit breakers to a NestJS application. Can someone give me a hint, support, sample code, or any other resource that would be helpful?
Circuit breaking is a pattern that applies to service-to-service requests and doesn't have much to do with NestJS in particular. If you plan on instrumenting several NestJS APIs that communicate with one another, I'd suggest you look at configuring circuit breakers inside the load balancers or service mesh. Here are a few links that can help you explore how to do that with Nginx, HashiCorp Consul, and AWS App Mesh respectively:
https://www.nginx.com/blog/microservices-reference-architecture-nginx-circuit-breaker-pattern/
https://learn.hashicorp.com/tutorials/consul/service-mesh-circuit-breaking?in=consul/developer-mesh
https://github.com/aws/aws-app-mesh-roadmap/issues/6#issuecomment-694598440
We did this for a project a few years ago, where we wrapped circuit breakers with a decorator used in Nest. https://github.com/valor-software/nest-circuit-breaker
This allowed us to use patterns like this:
@CircuitBreakerProtected({
  circuitBreakerSleepWindowInMilliseconds: 3000,
  circuitBreakerErrorThresholdPercentage: 50,
  circuitBreakerRequestVolumeThreshold: 10,
  timeout: 10000,
  statisticalWindowLength: 10000,
  statisticalWindowNumberOfBuckets: 10,
  percentileWindowLength: 10000,
  percentileWindowNumberOfBuckets: 10,
  requestVolumeRejectionThreshold: 0,
  fallbackTo: undefined,
  shouldErrorBeConsidered: undefined
})
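Under the hood, a breaker like the one behind that decorator is a small state machine that flips between closed, open, and half-open states. Here is a minimal, self-contained TypeScript sketch of that idea; it is an illustration only, not the actual nest-circuit-breaker implementation, and the class name, method names, and defaults are invented:

```typescript
// Minimal circuit breaker state machine (illustrative sketch only;
// the real package exposes many more options, as the decorator above shows).
type State = "CLOSED" | "OPEN" | "HALF_OPEN";

class CircuitBreaker {
  private state: State = "CLOSED";
  private failures = 0;
  private successes = 0;
  private openedAt = 0;

  constructor(
    private readonly errorThresholdPercentage = 50, // trip when error rate reaches this...
    private readonly requestVolumeThreshold = 10,   // ...but only after this many calls
    private readonly sleepWindowMs = 3000,          // how long to stay open before a trial call
  ) {}

  async exec<T>(fn: () => Promise<T>, fallback?: () => T): Promise<T> {
    if (this.state === "OPEN") {
      if (Date.now() - this.openedAt < this.sleepWindowMs) {
        if (fallback) return fallback();
        throw new Error("Circuit open");
      }
      this.state = "HALF_OPEN"; // sleep window elapsed: let one trial call through
    }
    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (err) {
      this.onFailure();
      if (fallback) return fallback();
      throw err;
    }
  }

  private onSuccess(): void {
    if (this.state === "HALF_OPEN") {
      // Trial call succeeded: close the circuit and reset counters.
      this.state = "CLOSED";
      this.failures = this.successes = 0;
      return;
    }
    this.successes++;
  }

  private onFailure(): void {
    if (this.state === "HALF_OPEN") {
      this.trip(); // trial call failed: re-open immediately
      return;
    }
    this.failures++;
    const total = this.failures + this.successes;
    if (
      total >= this.requestVolumeThreshold &&
      (this.failures / total) * 100 >= this.errorThresholdPercentage
    ) {
      this.trip();
    }
  }

  private trip(): void {
    this.state = "OPEN";
    this.openedAt = Date.now();
    this.failures = this.successes = 0;
  }
}
```

A decorator like `@CircuitBreakerProtected` would then just wrap the decorated method's call in `breaker.exec(...)`.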
I am building a microservices architecture for the first time, and even though I have read a lot of articles I am still confused about how to correctly implement a circuit breaker.
Let's suppose that I have several microservices that call each other. I implemented a circuit breaker in each of them as a request interceptor, and it works, but I don't like it.
Firstly, each service now needs to hit the failure threshold separately before the breaker opens. Secondly, I write the same functionality for each service over and over again.
So my first thought was to create the circuit breaker as a standalone service, but I cannot find any pattern describing such functionality. How would it work? Does every service, before making a request, first ask the circuit breaker service whether the target circuit is closed? If so, it sends the request and, when the request finishes, reports back to the circuit breaker service whether the request succeeded or failed?
Or how should a circuit breaker be correctly fitted into a microservices architecture?
When you are talking about a real microservices architecture, circuit breaking is a cross-cutting concern.
You should not implement it yourself. First of all, please be careful about creating spaghetti between your microservices; it's dangerous and an anti-pattern.
Since hand-rolling it is an anti-pattern, I highly recommend using cloud-native platforms to deploy your microservices, like Kubernetes or maybe Docker.
There are lots of useful tools, like Envoy-based sidecars and service mesh implementations using Istio (not my recommendation), Consul, and other HashiCorp products.
You can improve your service discovery, observability, monitoring, circuit breaking, logging, service-to-service communication, and other useful concerns using cloud-native tools.
Hint: I highly recommend using gRPC instead of plain HTTP requests between your services (it reduces latency by multiplexing calls over persistent HTTP/2 connections).
Secondly, I write the same functionality for each service over and over again.
One way to address this issue in the world of microservices is (as you correctly noticed) to move this functionality out of your services. Circuit breaking is just one element; there are many, many other aspects of inter-service communication that you'd have to take care of, such as handling retries, failovers, authentication and authorization, tracing, monitoring, etc.
If you were to handle all of that in each service separately, you'd end up writing the same code (or configuring various frameworks/plugins) over and over again.
The solution that emerged from that need is a service mesh. You can think of it as a middleman that intercepts all the communication between your services and takes care of all the above-mentioned aspects.
There are various solutions. You can check https://github.com/cncf/landscape to find out what is currently "hot" and considered a standard.
I'd recommend getting familiar with https://istio.io/latest/about/service-mesh/, however, as it's really mature and powerful.
I am stuck choosing one API gateway from the three mentioned below:
KrakenD (https://www.krakend.io/)
Kong (https://konghq.com/kong/)
Spring Cloud Gateway (https://cloud.spring.io/spring-cloud-gateway/reference/html/)
My requirements are:
Good performance, and it must have the majority of API gateway features.
Support for aggregating data from two different microservices' APIs.
All three of them look good from the feature list and performance-wise.
I am thinking of relaxing the second requirement, as I am not sure whether that is a good practice or not.
API gateway is a concept used in all kinds of products; I really think the industry should start sub-categorizing these products, as most of them are completely different from each other.
I'll try to summarize here the main highlights according to your requirements.
Both Kong and KrakenD offer the "majority" of API gateway functionalities. Although the word is fuzzy, at least all of them cover stuff like routing, rate limiting, authorization, and such.
Kong
Kong is basically an Nginx proxy that adds a lot of functionality on top of it using Lua.
When using Kong, your endpoints have a 1:1 relationship with your backends. Meaning that you declare an endpoint in Kong that exposes data from one backend and does the magic in the middle (authorization, rate limiting, etc.). This magic is the essence of Kong and is based on Lua plugins (unfortunately, these are not written in C, as Nginx is).
If you want to aggregate data from several backends into one single endpoint, Kong does not fit your scenario.
Finally, Kong is stateful (it's impressive how they try to sell it the other way around, but this is out of the scope of this question). The configuration lives inside a database, and changes to the configuration are made through an API that ends up modifying its internal Postgres (or equivalent).
Performance is also inevitably tied to the existence of this database (and to Lua), and going multi-region can be a real pain.
Kong functionality can be extended with Lua code.
In summary:
Proxy with cross cutting concerns
Nodes require coordination and synchronization
Mutable configuration
The database is the source of truth
More pieces, more complexity
Multi-region lag
Requires powerful hardware to run
Customizations in Lua
KrakenD
KrakenD is a service written from the ground up using Go, taking advantage of the language features for concurrency, speed, and small footprint. In terms of performance, this is the winning racehorse.
KrakenD's natural positioning is as a gateway with aggregation. It's meant to connect lots of backend services to a single endpoint, and it's mostly adopted by companies for feeding mobile applications, web apps, and other clients. It implements the Backend for Frontend pattern, allowing you to define, exactly and with a declarative configuration, the API that you want to expose to the clients. You can choose which fields are taken from responses, aggregate them, validate them, transform them, etc.
KrakenD is stateless: you version your API the same way you do the rest of your code, using git, and you deploy it the same way you deploy your application (e.g., a CI/CD pipeline that pushes a new container with the new configuration and rolls it out). As everything is in the config, there is no need for a central database, nor do nodes need to communicate with each other.
As for customizations, with KrakenD you can create middlewares and plugins, or just script in several languages: Go, Lua, Common Expression Language (CEL, with a JS-like syntax), and the Martian DSL.
In summary:
On-the-fly API creation using upstream services, with cross-cutting concerns (API gateway).
Not a proxy, although it can be used as one.
No node coordination
No synchronization needed
Zero complexity (docker container with a configuration file)
No challenges for Multi-region
Declarative configuration
Immutable infrastructure
Runs on micro and small machines in production without issues.
Customizations in Go, Lua, CEL, and Martian DSL
Spring Cloud Gateway
(As well as Zuul) it is used mostly by Java developers who want to stay in the JVM space. I am less familiar with this one, but its design is also for proxying to existing services, adding the cross-cutting concerns of an API gateway.
I see it more as a framework that you use to deliver your API. With this product you need to code the transformations yourself in Java. The included gateway functionalities are declarative as well.
--
I hope this sheds some light.
My only, but major, blocker with Kong is that you can only extend it with Lua. Only a small percentage of developers in the world are familiar with Lua. That's why I chose KrakenD.
I am learning AWS Lambda and have a basic architecture question about managing HTTPS calls from multiple Lambda functions to a single external service.
The external service will only process 3 requests per second from any one IP address. Since I have multiple asynchronous Lambdas, I cannot be sure I will stay below this threshold. I also don't know which IPs my Lambdas use, or even whether they are the same or not.
How should this be managed?
I was thinking of using an SQS FIFO queue, but I would need to set up a bidirectional system to get the call responses back to the appropriate Lambda. I think there must be a simple solution to this; I'm just not familiar enough yet.
What would you experts suggest?
If I am understanding your question correctly:
You can create an API endpoint by building an API Gateway with Lambda integrations (Lambda proxy integration preferred) and then use the throttling options to decide the throughput. This can be done at different levels (account level, method level, etc.; see the AWS docs).
You can perform some load testing using Gatling or any other tool and then generate a report which can show, for example, that even if you have 6 tps on your site, throttling at the method level means the external service is hit at only 3 tps.
How you throttle would depend upon your architecture; I have done it at the method level to protect an external service at 8 tps.
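If all outbound calls are funneled through a single consumer process (e.g., one Lambda draining the SQS queue mentioned in the question), the per-second cap can also be enforced client-side with a sliding-window limiter. Here is a TypeScript sketch; only the 3-requests-per-second figure comes from the question, and the class name and endpoint are invented for illustration:

```typescript
// Sliding-window rate limiter: allows at most `limit` calls per `windowMs`.
// This only works when all traffic flows through one process, e.g. a single
// queue consumer sitting in front of the external service.
class RateLimiter {
  private timestamps: number[] = [];

  constructor(
    private readonly limit: number,
    private readonly windowMs: number,
  ) {}

  // Resolves once the caller may proceed without exceeding the limit.
  async acquire(): Promise<void> {
    for (;;) {
      const now = Date.now();
      // Drop timestamps that have left the sliding window.
      this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
      if (this.timestamps.length < this.limit) {
        this.timestamps.push(now);
        return;
      }
      // Wait until the oldest call exits the window, then re-check.
      const waitMs = this.windowMs - (now - this.timestamps[0]);
      await new Promise((r) => setTimeout(r, waitMs));
    }
  }
}

// Hypothetical usage inside the queue consumer:
//   const limiter = new RateLimiter(3, 1000); // 3 requests per second
//   await limiter.acquire();
//   await fetch("https://external.example.com/api");
```

This complements, rather than replaces, the API Gateway throttling described above: the gateway throttles your inbound traffic, while this limiter protects the external service's limit.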
For microservices, the common design pattern used is the API gateway. I am a bit confused about its implementation and implications. My questions/concerns are as follows:
Why are other patterns for microservices not generally discussed? If they are, did I miss them?
If we deploy a gateway server, isn't it a bottleneck?
Isn't the gateway server vulnerable to crashes/failures due to excessive requests at a single point? I believe the load would be enormous at this point (keeping in mind that Netflix is doing something like this). Correct me if I am wrong.
Will stream/download/upload data (like files, videos, images) also pass through the gateway server along with the other middleware services?
Why can't we use the proxy pattern instead of a gateway?
From my understanding, in an ideal environment a gateway server entertains the requests from clients and responds after the microservices have performed the due task.
Additionally, I was looking at Spring Cloud Gateway. It seems to be what I am looking for in a gateway server, but its routing functionality confuses me: is it just a routing (redirect) service, with the microservice directly responsible for the response to the client?
The gateway pattern is used to provide a single interface to a bunch of different microservices. If you have multiple microservices providing data for your API, you don't want to expose all of these to your clients. Much better for them to have just a single point of entry, without having to think about which service to poll for which data. It's also nice to be able to centralise common processing such as authentication. Like any design pattern, it can be applied very nicely to some solutions and doesn't work well for others.
If throughput becomes an issue, the gateway is very scalable: you can just add more gateways and load balance them.
There are some subtle differences between the proxy pattern and the API gateway pattern. I recommend this article for a pretty straightforward explanation:
https://blog.akana.com/api-proxy-or-gateway/
In the area of microservices, the API gateway is a proven pattern. It has several advantages, e.g.:
It encapsulates several edge functionalities (like authentication, authorization, routing, monitoring, ...)
It hides all your microservices and controls access to them (I don't think you want your clients to be able to access your microservices directly).
It may encapsulate the communication protocols required by your microservices (sometimes a service may internally use a mixture of protocols that are only allowed within a firewall).
An API gateway may also provide "API composition" (orchestrating the calls to several services and merging their results into one). It is not recommended to implement such composition in a microservice.
and so on
Implementing all these features in a proxy is not trivial. There are a couple of API gateways which provide all these functionalities and more, like Netflix Zuul, Spring Cloud Gateway, or the Akana Gateway.
Furthermore, in order to keep your API gateway from becoming a bottleneck, you may:
Scale your API gateway and load balance it (as mentioned above by Arran_Duff)
Not provide a single one-size-fits-all API for all your clients. Doing so would, in the case of huge request volumes (or large files to download/upload), surely lead to the problems you mentioned in questions 3 and 4. To mitigate such a situation, your gateway may provide each client with a client-specific API (an API gateway instance serving only a certain client type or business area). This is exactly what Netflix did to resolve this problem (see https://medium.com/netflix-techblog/embracing-the-differences-inside-the-netflix-api-redesign-15fd8b3dc49d)
1. Why are other patterns for microservices not generally discussed? If they are, did I miss them?
There are many microservice patterns under different categories, such as database, service, etc. This is a very good article: https://microservices.io/patterns/index.html
2. If we deploy a gateway server, isn't it a bottleneck?
Yes, to some extent. See the image in the answer to question 3.
3. Isn't the gateway server vulnerable to crashes/failures due to excessive requests at a single point? I believe the load would be enormous at this point (keeping in mind that Netflix is doing something like this). Correct me if I am wrong.
4. Will stream/download/upload data (like files, videos, images) also pass through the gateway server along with the other middleware services?
5. Why can't we use the proxy pattern instead of a gateway?
The choice between an API proxy and an API gateway depends on what capabilities you require and where you are in the API lifecycle. If you already have an existing API that doesn't require the advanced capabilities an API gateway can offer, then an API proxy is the recommended route.
You can save valuable engineering bandwidth, because proxies are much easier to maintain, and you won't suffer any noticeable performance loss. If you need specific capabilities that a proxy doesn't offer, you could also develop an in-house layer to accommodate your use case. If you are earlier in the API lifecycle, or need the extra features an API gateway can provide, then investing in one will pay dividends.
I am building out a microservices architecture and am kinda confused about one part. I am using Kafka as a message broker to communicate between my services. A perfect example would be Uber's API for request estimation: it returns duration, distance, price, etc. I would assume they have a microservice for each of those, i.e. a service for pricing, a service for duration/distance, a service for drivers, etc. My question is: when hitting the endpoint /requests/estimate, does the requests microservice make REST calls to the other microservices to retrieve the duration, distance, etc., or does the API gateway take care of that?
I say it depends on the use case. If service A needs to know what service B knows, then it is perfectly sane for service A to make a REST call to service B. But if the combined knowledge for A and B is only needed in your gateway then the gateway can combine the results.
Both are perfectly valid ways of doing it, but I would go the Estimate microservice way to avoid putting too much logic in the API Gateway.
Maybe in the future your estimation calculation will change, and it wouldn't make much sense to me to update the gateway every time.
In practice, not all API gateways support making multiple calls and aggregating the results. In microservice architecture there is a common pattern for this ("API Composition"; among the composition patterns, specifically the "Aggregator Pattern"): you create a separate service that contains the business logic for the multiple calls and the aggregation.
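That aggregator service can be sketched as follows, using the estimate example from the question above. The service names, URLs, and response shapes are all hypothetical:

```typescript
// Aggregator ("API Composition") sketch: one endpoint fans out to two
// downstream services in parallel and merges their responses into one.
// All URLs and response shapes here are invented for illustration.
interface Estimate {
  distanceKm: number;
  durationMin: number;
  price: number;
}

async function fetchJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Upstream ${url} failed: ${res.status}`);
  return (await res.json()) as T;
}

// Pure merge step, kept separate from the I/O so it is easy to test.
function mergeEstimate(
  route: { distanceKm: number; durationMin: number },
  pricing: { price: number },
): Estimate {
  return { ...route, price: pricing.price };
}

async function estimate(from: string, to: string): Promise<Estimate> {
  // Fan out in parallel; the aggregator fails fast if either upstream fails.
  const [route, pricing] = await Promise.all([
    fetchJson<{ distanceKm: number; durationMin: number }>(
      `http://route-service/route?from=${from}&to=${to}`,
    ),
    fetchJson<{ price: number }>(
      `http://pricing-service/price?from=${from}&to=${to}`,
    ),
  ]);
  return mergeEstimate(route, pricing);
}
```

Whether this aggregator lives behind the gateway as its own microservice or inside the gateway itself is exactly the trade-off discussed in the answers above.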