Consistency handling in gateway aggregation? - microservices

We have a scenario that is a candidate for the gateway aggregation pattern (https://learn.microsoft.com/en-us/azure/architecture/patterns/gateway-aggregation). But we have an additional complexity: the aggregation must also maintain consistency among these service calls. Is it a good pattern to implement a saga in the gateway?

Related

What is the general term for algorithms that handle receiving requests?

Stuff like debouncing, the leaky bucket algorithm, throttling, etc.
I really want to learn more about these and find similar algorithms that handle receiving multiple requests at a time on a server.
The algorithms you mentioned are used for rate limiting.
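As a rough illustration of that family, here is a minimal token-bucket sketch in Java; the class and parameter names are invented for the example. A leaky bucket works similarly, except it drains queued requests at a fixed rate instead of refilling tokens.

```java
/**
 * Minimal token-bucket rate limiter: each request consumes a token,
 * tokens refill at a fixed rate, and requests are rejected when the
 * bucket is empty. Names and structure are illustrative only.
 */
public class TokenBucket {
    private final long capacity;          // maximum tokens the bucket can hold
    private final double refillPerMillis; // tokens added per millisecond
    private double tokens;                // current token count
    private long lastRefillMillis;        // timestamp of the last refill

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMillis = refillPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefillMillis = System.currentTimeMillis();
    }

    /** Returns true if the request is allowed, false if it should be throttled. */
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        tokens = Math.min(capacity, tokens + (now - lastRefillMillis) * refillPerMillis);
        lastRefillMillis = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```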

How to choose optimal circuit breaker parameters for microservices?

I was watching this video by Java Brains (https://www.youtube.com/watch?v=CSqxIKJhFRI&list=PLqq-6Pq4lTTbXZY_elyGv7IkKrfkSrX5e&index=14). At timestamp 3:48, he states that in his experience the parameters for microservice circuit breakers are best chosen by trial and error.
I was wondering if anyone could provide any resources on how to choose optimal circuit breaker parameters (e.g., the Hystrix parameters for a Spring Boot application). Also, is there any room for using some algorithm, such as machine learning, to predict these optimal parameters for you? I would love to know your thoughts on this subject. Thanks!
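For context on what those parameters usually control, here is a hand-rolled circuit breaker sketch (not Hystrix itself; all names and the exact state machine are illustrative). The failure threshold, open-state wait duration, and half-open probe count are the kind of knobs the video refers to, and tuning them is a trade-off between how quickly you trip and how quickly you probe for recovery.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

/** Illustrative circuit breaker; the three constructor parameters are the ones typically tuned. */
public class SimpleCircuitBreaker {
    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;  // consecutive failures before opening
    private final Duration openDuration; // how long to stay open before probing
    private final int halfOpenProbes;    // successful probes needed to close again

    private State state = State.CLOSED;
    private int failures = 0;
    private int probeSuccesses = 0;
    private Instant openedAt;

    public SimpleCircuitBreaker(int failureThreshold, Duration openDuration, int halfOpenProbes) {
        this.failureThreshold = failureThreshold;
        this.openDuration = openDuration;
        this.halfOpenProbes = halfOpenProbes;
    }

    public synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (state == State.OPEN) {
            if (Duration.between(openedAt, Instant.now()).compareTo(openDuration) < 0) {
                return fallback.get();   // still open: short-circuit the call
            }
            state = State.HALF_OPEN;     // open window elapsed: allow probe calls
            probeSuccesses = 0;
        }
        try {
            T result = action.get();
            onSuccess();
            return result;
        } catch (RuntimeException e) {
            onFailure();
            return fallback.get();
        }
    }

    private void onSuccess() {
        if (state == State.HALF_OPEN && ++probeSuccesses >= halfOpenProbes) {
            state = State.CLOSED;        // downstream looks healthy again
        }
        failures = 0;
    }

    private void onFailure() {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN;          // trip the breaker
            openedAt = Instant.now();
        }
    }
}
```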

Topic design in a microservice architecture with eventing

I am following a microservice architecture with eventing on Azure Service Bus. I need to publish events to a topic so a few microservices can process it.
I face three options. I can create just one topic and publish all sorts of events to that single topic. Or, I can create a topic for each event. Or, I can create a topic for each microservice.
My question is: what is considered best practice in this scenario?
Note that the same event can generally be published to multiple topics. So you can publish an event to both an "every event from every service" firehose topic and an "every event from this service" topic.
In general, for any two disjoint types/categories of events, ask whether there is likely to be a consumer for which interest in type X suggests interest in type Y (not a strong implication, more a prediction worth betting on if the payoff is acceptable). If that seems likely, then a topic for the combined type Z (any event that is type X or type Y) is worth having.
In many cases, especially if events of a particular type are only emitted by one particular service and that service can make some ordering guarantees, it can be worth just having a topic for "every event from this service" and then have a consumer of that topic which remixes the messages into appropriate other topics.
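A minimal sketch of the "publish the same event to both a firehose topic and a per-service topic" idea, assuming the azure-messaging-servicebus Java SDK; the topic names, property key, and connection string are placeholders for this example.

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class EventPublisher {
    private final ServiceBusSenderClient firehoseSender;
    private final ServiceBusSenderClient serviceSender;

    public EventPublisher(String connectionString) {
        // "all-events" is the firehose topic, "orders-events" the per-service topic;
        // both names are placeholders for this sketch.
        this.firehoseSender = new ServiceBusClientBuilder()
                .connectionString(connectionString)
                .sender()
                .topicName("all-events")
                .buildClient();
        this.serviceSender = new ServiceBusClientBuilder()
                .connectionString(connectionString)
                .sender()
                .topicName("orders-events")
                .buildClient();
    }

    /** Publish the same event body to both topics so either subscription granularity works. */
    public void publish(String eventType, String jsonBody) {
        // One message instance per send; the event type goes into application
        // properties so topic subscriptions can filter on it.
        ServiceBusMessage toFirehose = new ServiceBusMessage(jsonBody);
        toFirehose.getApplicationProperties().put("eventType", eventType);
        firehoseSender.sendMessage(toFirehose);

        ServiceBusMessage toServiceTopic = new ServiceBusMessage(jsonBody);
        toServiceTopic.getApplicationProperties().put("eventType", eventType);
        serviceSender.sendMessage(toServiceTopic);
    }
}
```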

Separate Messaging system Inside Bounded Context

Is it good practice to run a separate messaging system for internal domain events inside a bounded context? Or is it better to reuse the common one, which is listened to by all bounded contexts?
The two options, described in the question's diagrams:
Option one: a common RabbitMQ for all contexts.
Option two: a separate RabbitMQ for each BC.
I think the first approach (a common broker) is totally valid. Bounded contexts are abstractions that encapsulate the domain or business logic of one part of the business, whereas the messaging system is a piece that exists only to facilitate communication between these decoupled, hermetic bounded contexts. So I think having a single message broker shared by multiple bounded contexts is correct. In addition, this way you will have less overhead and latency. A sketch of what that can look like is below.
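As a concrete sketch of the shared-broker option, using the RabbitMQ Java client and assuming one broker with a separate topic exchange per bounded context; the exchange names, routing key, and payload are invented for the example.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class SharedBrokerExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // single broker shared by every bounded context

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // One topic exchange per bounded context keeps the contexts logically
            // separated even though they share the same broker.
            channel.exchangeDeclare("billing.events", "topic", true);
            channel.exchangeDeclare("shipping.events", "topic", true);

            // Internal domain event of the billing context: other contexts simply
            // don't bind queues to routing keys they are not interested in.
            byte[] body = "{\"invoiceId\":42}".getBytes(StandardCharsets.UTF_8);
            channel.basicPublish("billing.events", "invoice.paid", null, body);
        }
    }
}
```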

Guidance on Patterns and recommendations on achieving database Atomicity in distributed architecture (microservices)

Folks, I am evaluating options, patterns, and practices around a key challenge we are facing in a distributed (microservices) architecture: maintaining database atomicity across multiple tables.
Atomicity, reliability, and scale are all critical for the business (that is probably common across businesses, just putting it out there).
I have read a few articles about achieving this, but it all comes at a significant cost and not without certain trade-offs, which I am not ready to make.
I have also read a couple of SO questions, and one concept, the saga pattern, seems interesting, but I don't think our legacy database is meant to handle it.
So here I am asking experts for their personal opinions, guidance, and past experience, so I can save time and effort without having to try and learn a bunch of options.
Appreciate your time and effort.
CAP theorem
The CAP theorem is key when it comes to distributed systems. Start with it to decide whether you need availability or consistency.
Distributed transactions
You are right: there are trade-offs involved and there is no single right answer, and distributed transactions are no different. In a microservices architecture, atomicity is not easy to achieve. Normally we design microservices with eventual consistency in mind; strong consistency is very hard and not a simple solution.
SAGA vs 2PC
2PC: it is very easy to achieve atomicity using two-phase commit, but that option is not for microservices. Your system can't scale, since if any of the microservices goes down your transaction will hang in an abnormal state, and locks are very common with this approach.
SAGA is the most accepted and scalable approach. You commit the local transaction (atomically), and once that is done you publish an event; all the interested services consume the event and update their own local databases. If there is an exception, or a particular microservice can't accept the event data, it raises a compensating transaction, which means you have to reverse and undo the actions taken by all the microservices for that event. This is a widely accepted and scalable pattern; a rough sketch of the flow is below.
I don't get the legacy DB part. What makes you think the legacy DB will have a problem? SAGA has nothing to do with legacy systems. It simply means deciding whether you can accept the event or not: if yes, save it to your database; if not, raise a compensating transaction so all the other services can undo their work.
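A hand-rolled choreography sketch of that flow (commit locally, publish, compensate on failure). The event types, repository, and publisher interfaces are invented for the example, and a real implementation would also need something like an outbox to make "commit + publish" reliable.

```java
/** Invented event types for the sketch. */
record OrderCreated(String orderId, double amount) {}
record OrderRejected(String orderId, String reason) {}   // compensation trigger

/** Invented ports: a local transactional store and an event publisher. */
interface PaymentRepository { void reserveFunds(String orderId, double amount); }
interface EventPublisher { void publish(Object event); }

/** Payment service participating in the saga via choreography. */
class PaymentEventHandler {
    private final PaymentRepository repository;
    private final EventPublisher publisher;

    PaymentEventHandler(PaymentRepository repository, EventPublisher publisher) {
        this.repository = repository;
        this.publisher = publisher;
    }

    /** Consume the upstream event, commit the local transaction, or raise a compensating event. */
    void on(OrderCreated event) {
        try {
            // Local, atomic transaction in this service's own database.
            repository.reserveFunds(event.orderId(), event.amount());
            publisher.publish(new PaymentReserved(event.orderId()));
        } catch (RuntimeException e) {
            // Can't accept the event: publish a compensating event so the other
            // services undo whatever they already did for this order.
            publisher.publish(new OrderRejected(event.orderId(), e.getMessage()));
        }
    }

    record PaymentReserved(String orderId) {}
}
```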
What's the right approach?
Well, eventually it really depends on you. There are many patterns around when it comes to saving the transaction. Have a look at CQRS and the event sourcing pattern, which is used to save all the domain events. Since distributed transactions can be complex, CQRS solves many problems, e.g. eventual consistency.
Hope that helps! Shoot me questions if you have any.
One possible option is Command Query Responsibility Segregation (CQRS): maintain one or more materialized views that contain data from multiple services. The views are kept up to date by services that subscribe to the events each service publishes when it updates its data. For example, the online store could implement a query that finds customers in a particular region and their recent orders by maintaining a view that joins customers and orders. The view is updated by a service that subscribes to customer and order events.
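A sketch of that customers-and-orders view; the event types, in-memory view store, and handler wiring are all invented for illustration, and in practice the view would be a denormalized table or document maintained by a dedicated query-side service.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

/** Invented events published by the customer and order services. */
record CustomerCreated(String customerId, String name, String region) {}
record OrderPlaced(String orderId, String customerId, double total) {}

/** Denormalized row of the materialized view joining customers and orders. */
class CustomerOrdersView {
    final String customerId;
    final String name;
    final String region;
    final List<String> recentOrderIds = new CopyOnWriteArrayList<>();

    CustomerOrdersView(String customerId, String name, String region) {
        this.customerId = customerId;
        this.name = name;
        this.region = region;
    }
}

/** Query-side service: subscribes to both event streams and keeps the view up to date. */
class CustomerOrdersProjection {
    private final Map<String, CustomerOrdersView> viewByCustomer = new ConcurrentHashMap<>();

    void on(CustomerCreated event) {
        viewByCustomer.put(event.customerId(),
                new CustomerOrdersView(event.customerId(), event.name(), event.region()));
    }

    void on(OrderPlaced event) {
        CustomerOrdersView view = viewByCustomer.get(event.customerId());
        if (view != null) {
            view.recentOrderIds.add(event.orderId());
        }
    }

    /** The query the individual write-side databases could not answer on their own. */
    List<CustomerOrdersView> customersInRegion(String region) {
        return viewByCustomer.values().stream()
                .filter(v -> v.region.equals(region))
                .toList();
    }
}
```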
