Is it a good practice to run a separate messaging system for internal Domain Events inside a Bounded Context? Or is it better to reuse the common one, which is listened to by all bounded contexts?
Check out the images to understand the question better:
Option one (common RabbitMQ for all contexts):
Option two (separate RabbitMQ for each BC):
I think the first approach is totally valid. Bounded contexts are abstractions that encapsulate the domain or business logic related to one context of the business, whereas the messaging system is a piece that only exists to facilitate communication between these decoupled and hermetic bounded contexts. So I think that having a single message broker shared by multiple bounded contexts is correct. In addition, this way you will have less overhead and latency.
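To illustrate the single-broker approach, here is a minimal in-memory sketch (plain Python, no actual RabbitMQ) where each bounded context publishes under its own routing-key namespace, similar to topic-exchange bindings; the context and event names are invented for the example:

```python
from collections import defaultdict

class SharedBroker:
    """One broker shared by all bounded contexts; isolation comes from
    routing-key namespaces, not from separate broker instances."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # routing-key prefix -> handlers

    def subscribe(self, prefix, handler):
        self._subscribers[prefix].append(handler)

    def publish(self, routing_key, event):
        # Deliver only to handlers whose prefix matches the routing key,
        # roughly like a RabbitMQ topic exchange with wildcard bindings.
        for prefix, handlers in self._subscribers.items():
            if routing_key.startswith(prefix):
                for handler in handlers:
                    handler(routing_key, event)

broker = SharedBroker()
received = []
# A hypothetical Shipping context subscribes only to Billing's
# public invoice events:
broker.subscribe("billing.invoice.", lambda key, ev: received.append((key, ev)))

broker.publish("billing.invoice.created", {"invoice_id": 42})   # delivered
broker.publish("billing.ledger.rebalanced", {"entries": 7})     # internal, not subscribed
```

Because subscriptions are scoped by prefix, Billing's internal events never reach other contexts even though everything flows through one broker.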
So at first I thought we would have a 1:1 relation between a service and a bounded context. I thought strategic design in DDD would help decompose the Domain into several services and reduce a lot of complexity, but I was wrong. Actually you can have many services inside a bounded context, not just one. So how does a bounded context help when doing microservices, since you still have those messy services, just inside a bounded context with a specific ubiquitous language?
Is it a cardinal rule of microservices that a single database table should only be represented by a single microservice? I was asked that in an interview. My first reaction was that it should only be 1 to 1. But then I think I was overthinking it: maybe there are some edge-case scenarios where sharing a table may be acceptable.
So is it a cardinal rule of microservices that a single database table should always be represented by a single microservice? Or are there edge-case scenarios where sharing may be acceptable? If it is a cardinal rule, is there any standard acronym that includes that principle? For example, relational databases have the ACID properties.
It is not a cardinal rule, but it is the most effective way to manage data. Design patterns are not set in stone; you may choose to handle things differently.
However, each microservice should be independent; that is why we use the microservices architecture in the first place. If you update a table from multiple microservices, those services become interdependent and loose coupling no longer exists. The services will impact each other any time a change takes place.
This is why you may want to follow one of the following paradigms:
Private-tables-per-service – each service owns a set of tables that must only be accessed by that service.
Schema-per-service – each service has a database schema that's private to that service.
Database-server-per-service – each service has its own database server.
Refer to the data management section here for more: https://microservices.io/patterns/
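To make the database-server-per-service idea concrete, here is a small Python/SQLite sketch with two fully separate databases, one per hypothetical service (the table and service names are invented). A cross-service lookup has to go through the owning service's API rather than a cross-database join; here each service simply queries its own database:

```python
import sqlite3

orders_db = sqlite3.connect(":memory:")   # owned by the Orders service
users_db = sqlite3.connect(":memory:")    # owned by the Users service

orders_db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")
users_db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

users_db.execute("INSERT INTO users VALUES (1, 'Ada')")
# The order stores only a user *id*; there is no foreign key across servers.
orders_db.execute("INSERT INTO orders VALUES (100, 1)")

# Each service queries only the database it owns:
order_user_id = orders_db.execute(
    "SELECT user_id FROM orders WHERE id = 100").fetchone()[0]
user_name = users_db.execute(
    "SELECT name FROM users WHERE id = ?", (order_user_id,)).fetchone()[0]
```

The same separation applies to the weaker variants: private-tables-per-service and schema-per-service just draw the ownership boundary inside one server instead of across two.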
A separate database per microservice is not the only concern; there are other factors to consider while developing microservices, such as codebase, config, logging, etc.
Please refer to the link below, which explains them in detail.
https://12factor.net/
What are the conditions based on which a system should be split into micro-services, and how "small" should a micro-service be?
We were implementing a micro-services architecture in multiple projects, and I will try to share my experience with them and how we did it.
Let me first explain how we split our Domain into micro-services. Along the way, the criteria for how small or how big a micro-service should be will be explained as well. In order to understand that, we need to see the whole approach:
Conditions based on which we split our system into micro-services:
We split the micro-services based on two sets of environments:
Based on the Production/Staging setup.
This is, in general, how the system runs in a Production environment to be used by our customers.
Based on the Development setup (a developer's machine).
This is the setup each developer has to have on their machine in order to run/debug/develop the full system or parts of it.
Considering only the Production/Staging setup:
Based on DDD (Domain-Driven Design) Bounded Contexts:
Where 1 Bounded Context = 1 micro-service.
This was the biggest micro-service that we ended up having. Most of the time the Bounded Context was also split into multiple micro-services.
Why?
The reason is that our Domain was very big, so keeping the whole Bounded Context as one micro-service was very inefficient. By inefficient I mean mostly scaling reasons, but also development scaling (having smaller teams take care of one micro-service each) and some other reasons as well.
CQRS (Command Query Responsibility Segregation):
After splitting into one micro-service per Bounded Context, or multiple micro-services per Bounded Context, we split some of those micro-services further into two or more: one Command/Write micro-service and a second Read/Query micro-service.
For example, let's say you have a "Users micro-service" and a "Users-Read micro-service". The "Users micro-service" was responsible for creating, updating, deleting and general management of Users. On the other hand, the "Users-Read micro-service" was just responsible for retrieving Users (it was read-only). We were following the CQRS pattern.
The Write/Domain micro-service in some extreme cases had multiple Read micro-services. Sometimes these Read micro-services were so small that they held just one de-normalized, view-like representation, mostly using some kind of NoSQL DB for fast access. In some cases a Read micro-service was so small that, from a code perspective, it would just have a couple of C#/Java classes in it and one or two tables or JSON collections in its database.
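The Users / Users-Read split described above can be sketched roughly like this (an in-process simplification; in reality the two sides would be separate services connected through a message broker):

```python
class UsersService:
    """Write side: owns user management and publishes domain events."""
    def __init__(self, publish):
        self._users = {}
        self._publish = publish

    def create_user(self, user_id, name, email):
        self._users[user_id] = {"name": name, "email": email}
        self._publish({"type": "UserCreated", "id": user_id,
                       "name": name, "email": email})

class UsersReadService:
    """Read side: maintains a de-normalized, read-only view."""
    def __init__(self):
        self._view = {}

    def handle(self, event):
        if event["type"] == "UserCreated":
            self._view[event["id"]] = {"name": event["name"],
                                       "email": event["email"]}

    def get_user(self, user_id):
        return self._view.get(user_id)

read_side = UsersReadService()
# In-process "bus" for the sketch; production would use a broker.
write_side = UsersService(publish=read_side.handle)
write_side.create_user(1, "Ada", "ada@example.com")
```

Because the read side is fed only by events, it can be backed by whatever storage is fastest to query and scaled independently of the write side.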
Services which provide Domain-agnostic or static work:
Example 1 was a very small micro-service which was responsible for generating specific reports as PDFs from an HTML template.
Example 2 was a micro-service which just sent simple text messages to specific users based on their permissions.
Considering the Development Setup:
In addition to the micro-services used for the Production/Staging setup, for local development/running purposes we needed special micro-services which would do some work in order to make the local setup work.
The local setup was done using Docker (docker-compose).
Examples of small micro-services there were:
Database, Cache, Identity, API Gateway and File Storage.
For all these things in the Production/Staging setup we were using a Cloud provider which offers managed services, so we did not need to put them in micro-services. But in order to have the whole system running for development/testing purposes, we needed to create small micro-services in the form of Docker containers to replace these Cloud services.
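A local setup along these lines might look like the following docker-compose fragment; the images, ports and password are illustrative assumptions, not taken from the original setup:

```yaml
# Hypothetical docker-compose.yml fragment replacing managed Cloud
# services with local containers for development.
version: "3.8"
services:
  database:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev-only-password
    ports:
      - "5432:5432"
  cache:
    image: redis:7
    ports:
      - "6379:6379"
  file-storage:
    image: minio/minio        # stands in for the Cloud blob storage
    command: server /data
    ports:
      - "9000:9000"
```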
Adding test/seeding data into the system.
In order to feed the local development system of micro-services with data, we needed a small micro-service whose sole purpose was to call some APIs exposed by the other micro-services and post some data into them.
This way we could set up a working development environment with predefined data in order to test some simple business scenarios.
The good thing about this is that you can use this test-data setup in combination with the local development setup for creating integration and end-to-end tests.
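A seeding service like this can stay trivially small. Here is a hedged sketch in which the endpoint path, the sample data, and the `post` transport are all invented; in a real setup `post` could wrap an HTTP client pointed at the docker-compose network:

```python
# Predefined test data for the local environment (illustrative).
SEED_USERS = [
    {"name": "Test User 1", "email": "u1@example.test"},
    {"name": "Test User 2", "email": "u2@example.test"},
]

def seed(post):
    """Push the predefined data through the public APIs of the
    running micro-services. `post(path, body)` is the transport."""
    for user in SEED_USERS:
        post("/api/users", user)

# With a fake transport, the same seeder doubles as a test fixture:
sent = []
seed(lambda path, body: sent.append((path, body)))
```

Injecting the transport is what makes the seeder reusable for integration and end-to-end tests: the same data set can be replayed against a real gateway or a test double.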
How small should a micro-service be?
In one of our cases, the smallest micro-services were a couple of View/Read-only micro-services which had only one de-normalized view (1 table or 1 JSON collection) and, from a code perspective, a couple of C#/Java classes. So when it comes to code, I don't think much smaller than this would be a reasonable option. Again, this is a subjective view.
This size can be considered "too small" based on some suggestions around micro-services which you can read about online. We did it because it helped us solve performance issues. The demand for this data was so big that we isolated it so that we could scale it independently. This gave us the possibility to scale this micro-service/view individually, based on its own needs and independently from the rest of that Domain.
The second case of a small micro-service is the html-to-pdf service, which just created PDF documents from a specifically formatted HTML template. You can imagine how small this subset of functionality was.
Suggestion:
My suggestion for everyone designing micro-services would be to ask the right questions:
How big should a micro-service be so that we don't end up with a monolith split into multiple monoliths?
This would mean that the created micro-services are too big and hard to manage, which was exactly the problem with monoliths. On top of that, you get the drawbacks of distributed systems.
Is the size of a micro-service going to affect your performance?
For your customers the key is that the system's performance is good, so considering this as a criterion for your micro-services architecture could be a very valid point.
Should or can we extract some critical part of the functionality/logic in order to isolate it?
Which critical logic is so important that you cannot afford to have it broken, or to suffer downtime for it?
This way you can protect the most critical parts of your system.
Can I organize my team or teams around this kind of micro-services split?
Considering I have the micro-services architecture done and I have "n" micro-services, how will I manage them? This means supporting them, orchestrating deployment, scaling based on needs, monitoring and so on.
If the architecture that you came up with turns out to be challenging and unmanageable for your organisation or teams, then reconsider it. Nobody needs an unmanageable system.
There are many more questions which could lead you in the right direction, but these were the ones we were following. Answering them will automatically lead you to the smallest/biggest possible micro-service for your Domain/Business. My examples above about micro-service size, and the rules we were using, might not work for your case, but answering these questions will bring you closer to your own approach/rule for deciding that.
Conclusion
Micro-services should be as small as you need them to be to fit your needs. The name "micro" indicates that they can be very small.
Be careful not to make this a rule for your whole system.
The smallest micro-services are rather an exception, used to solve a specific problem like scaling or critical-logic isolation, than a rule for designing/splitting the whole system into micro-services of that size. If you have too many very small micro-services, just for the sake of having them be small, you will have a hard time managing them with no real benefit. Be careful how you split it.
Folks, I am evaluating options/patterns and practices around the key challenge of maintaining DB atomicity (across multiple tables) that we are facing in a distributed (microservices) architecture.
Atomicity, reliability and scale are all critical for the business (this might be common across businesses, just putting it out there).
I read a few articles about achieving this, but it all comes at a significant cost and not without certain trade-offs, which I am not ready to make.
I read a couple of SO questions, and one concept, SAGA, seems interesting, but I don't think our legacy database is meant to handle it.
So here I am asking experts for their personal opinions, guidance and past experience, so I can save time and effort without trying and learning a bunch of options.
Appreciate your time and effort.
CAP theorem
The CAP theorem is key when it comes to distributed systems. Start with it to decide whether you favor availability or consistency when a network partition occurs.
Distributed transactions
You are right, there are trade-offs involved and no single right answer, and distributed transactions are no different. In a microservices architecture, atomicity is not easy to achieve. Normally we design microservices with eventual consistency in mind; strong consistency is very hard and not a simple solution.
SAGA vs 2PC
With 2PC (two-phase commit) it's very easy to achieve atomicity, but that option is not for microservices: your system can't scale, since if any of the microservices goes down your transaction will hang in an abnormal state, and locks are very common with this approach.
SAGA is the most accepted and scalable approach. You commit a local transaction (atomically); once done, you publish an event, and all the interested services consume the event and update their own local databases. If there is an exception, or a particular microservice can't accept the event data, it raises a compensating transaction, which means you have to reverse and undo the actions taken by the other microservices for that event. This is a widely accepted pattern and it scales.
I don't get the legacy-DB part. What makes you think a legacy DB will have a problem? SAGA has nothing to do with legacy systems. It simply means deciding whether to accept the event or not. If yes, save it into the database. If not, raise a compensating transaction so all the other services can undo.
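The commit-then-compensate flow described above can be sketched in a few lines; the step names and the orchestration helper are invented for illustration:

```python
def run_saga(steps):
    """steps: list of (action, compensation) pairs. Runs actions in order;
    on the first failure, runs the compensations of the completed steps
    in reverse order. Returns True if every action succeeded."""
    completed = []
    for action, compensation in steps:
        try:
            action()
        except Exception:
            for undo in reversed(completed):
                undo()
            return False
        completed.append(compensation)
    return True

def fail(msg):
    raise RuntimeError(msg)

log = []
ok = run_saga([
    (lambda: log.append("order created"),   lambda: log.append("order cancelled")),
    (lambda: log.append("payment charged"), lambda: log.append("payment refunded")),
    (lambda: fail("no stock"),              lambda: None),  # inventory step rejects
])
```

Each action stands for a local, atomic commit in one service; when the inventory step fails, payment and order are undone in reverse order, which is exactly the "reverse and undo" behavior of a compensating transaction.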
What's the right approach ?
Well, it really depends on you eventually. There are many patterns around when it comes to saving the transaction. Have a look at CQRS and the event sourcing pattern, which is used to save all the domain events. Since distributed transactions can be complex, CQRS solves many of these problems, e.g. eventual consistency.
Hope that helps! Shoot me questions if you have any.
One possible option is Command Query Responsibility Segregation (CQRS) – maintain one or more materialized views that contain data from multiple services. The views are kept by services that subscribe to the events each service publishes when it updates its data. For example, an online store could implement a query that finds customers in a particular region and their recent orders by maintaining a view that joins customers and orders. The view is updated by a service that subscribes to customer and order events.
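A hedged sketch of such a materialized view, using invented event shapes and an in-process subscription instead of a real broker:

```python
from collections import defaultdict

class CustomerOrdersView:
    """View service: subscribes to customer and order events and
    maintains a de-normalized join for region-based queries."""
    def __init__(self):
        self.customers = {}               # customer_id -> region
        self.orders = defaultdict(list)   # customer_id -> order ids

    def on_customer_created(self, event):
        self.customers[event["customer_id"]] = event["region"]

    def on_order_placed(self, event):
        self.orders[event["customer_id"]].append(event["order_id"])

    def orders_in_region(self, region):
        # The "join" is precomputed; the query is a simple lookup.
        return {cid: self.orders[cid]
                for cid, r in self.customers.items() if r == region}

view = CustomerOrdersView()
view.on_customer_created({"customer_id": 1, "region": "EU"})
view.on_customer_created({"customer_id": 2, "region": "US"})
view.on_order_placed({"order_id": 10, "customer_id": 1})
```

The query never touches the customer or order services' own databases; it only reads the view, which is kept eventually consistent by the event subscriptions.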
So my question is as follows: is it possible to move an actor inside the system boundary of a use case diagram? Can it be a part of the system?
I set a server as an actor, where a customer interacts with the server in an e-commerce environment. Is that possible, or should I move the server inside the system, since the server is a part of the system that the customer is interacting with?
This server is most likely also going to be used by an admin role.
TL;DR
No, you can't do that, unless you model only a part of the system.
Explanation
By definition an actor is external to the system. It can be a user, another system or a sensor.
If you want to show a system's decomposition into smaller parts, use a component diagram.
Note that the role of a use case diagram is to show the functions of the system as a whole.
On the other hand, you may depict just one part of the system (i.e. a system tier). In that case the other parts (tiers) are external to the modeled part under consideration.
I suppose you mean "move an actor inside the system boundary", since in any case the actor appears inside the UC diagram (or you just wouldn't see it).
You can do that. However, it would be rather pointless, since actors are meant to interact with the system under consideration (SUC) from outside. The only case where it makes sense is when you create sub-systems (that is, you have boundaries of sub-systems within the SUC boundary). I wouldn't do that from the very beginning either; only in a later design phase could you introduce such a construct. In that case you'd have independent teams working on the different sub-systems and one on the integration of the SUC. For "normally" sized systems you should leave these sub-systems out and focus on actors and their UCs inside the SUC boundary.