Best practice to share domain model between two microservices

Are there any best practices or guidelines on how to share the domain model between two micro-services?
I have a micro-service (1) which provides endpoints for all CRUD interactions with a resource (e.g., Order), and another micro-service (2) which performs a specific non-CRUD task on that resource. Micro-service (2) needs almost all of the order's attributes to perform its operation. In this case, does it make sense to create a common shared library of the domain model and share it between the two services? I could technically combine 1 and 2, but micro-service (2) needs to scale independently as it is quite memory- and CPU-intensive.

As far as I can see, these two services need to share the same data, and you are considering also sharing the library used to read and modify this data.
They seem to belong to the same "bounded context", and so it would be ideal to consider them as a single service.
If you really can't separate their data and their logic, it would be better to keep them together.
I do not think that components of the same microservice have to follow the same deployment pattern: they can be the same microservice, managed by the same team, sharing the same source repository and released together, yet deployed with different strategies, because only the second component needs high scalability.
So same bounded context, same service, but different components.
I do not have a lot of experience, so take this as my personal opinion.

Related

Microservices communication within bounded context

As a part of our DDD design, we are working on a bounded context and have identified two microservices A and B.
Service A needs to make calls to Service B via a REST API. Service B already provides an OpenAPI spec describing how to get any data, and we use the OpenAPI Generator to auto-generate client-side DTOs.
There is exactly a one-to-one mapping between B's DTOs and A's domain objects.
As far as I understand, we should use an anti-corruption layer (using hexagonal architecture) if our service is communicating with an untrusted third-party service. But what should we do if the communication is between internal services, as in the case above?
1. Should microservice B's API DTOs be used directly as domain objects in microservice A? This would mean that we do not create separate domain classes/objects in service A; we treat B's DTOs as domain objects in service A.
2. Should we create an adapter layer to convert B's DTOs to domain objects in microservice A?
3. Should I create domain objects in service A which would also serve as manually written output DTOs for service B? This is the opposite of point 1: there, DTOs are treated as domain objects, whereas here we treat domain objects as DTOs.
In my opinion, you are mixing two separate concepts:
The "anticorruption layer", which is a strategic DDD pattern
The layers in a layered or onion architecture, which are tactical patterns to separate concerns within an application
The goal of the anticorruption layer is to translate the ubiquitous language from another Bounded Context into your ubiquitous language so that your code doesn't get polluted with concepts not belonging to your Bounded Context.
The goal of the layers in a layered or onion architecture and specifically the contracts between them (the DTOs you are talking about) is to avoid changes in one part of the code (for example, adding or removing a property in a domain object in the Core) to cause issues somewhere else (like accidentally modifying the public API contract).
If I understand it correctly, your two microservices belong to the same Bounded Context. If that is the case, you shouldn't need any anticorruption layer because both microservices share the same ubiquitous language.
Now, regarding the options you propose, I'm not sure I fully understand options 2 and 3, but if you are doing real microservices, you can't use option 1: these microservices wouldn't be autonomous and independently deployable, as a change in microservice B's API would require a change and a coordinated deployment of microservice A.
So, design your microservices so that you have control over how their parts evolve (be able to make non-breaking changes in their APIs, change storage strategies without having to change the Core, change the core without having to change the API, etc). Of course, if a microservice is very simple, don't over-engineer it.
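For option 2, the adapter can be a thin mapping class at service A's boundary. A minimal sketch, with all names hypothetical (not taken from any generated client):

```java
// Hypothetical DTO shaped like one generated from service B's OpenAPI spec.
record OrderDto(String orderId, int quantity, String currencyCode) {}

// Service A's own domain object: free to evolve independently of B's API.
class Order {
    private final String id;
    private final int quantity;
    private final String currency;

    Order(String id, int quantity, String currency) {
        this.id = id;
        this.quantity = quantity;
        this.currency = currency;
    }

    String id() { return id; }
    int quantity() { return quantity; }
    String currency() { return currency; }
}

// The adapter is the only place that knows both shapes. If B renames a
// field in a new API version, only this class changes in service A.
class OrderAdapter {
    static Order toDomain(OrderDto dto) {
        return new Order(dto.orderId(), dto.quantity(), dto.currencyCode());
    }
}
```

The cost is one small class per DTO; the benefit is that regenerating B's client never ripples into A's core.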
The answer here is simple: there is no magic to make breaking changes non-breaking, so treat internal dependencies like you would external ones. That requires high discipline in versioning APIs and moving versions forward. Service B, as the API owner, needs to publish new API version specifications ahead of time, and Service A needs to be given time to adopt them. Depending on the size of your organization, it can be useful to support multiple API versions concurrently (for a while, that is). You may also want Team B to monitor usage of their API versions (e.g., how many consumers are still using an old version).
I would not put a high focus on how exactly your API specification is documented or which technology is used to generate client bindings (if at all). This should be up to the API owner's team, and the consuming service should be able to adopt whatever is offered. In general I would focus on simplicity and avoid additional layers if they don't provide a clear benefit. And the consuming team should be free to design their internals as they see fit (even with respect to data that comes from the other service). The whole point of a microservices architecture is to be able to move forward independently.
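Supporting two API versions concurrently often reduces, on the provider side, to keeping both contract shapes plus an upgrade path so the core only ever sees the latest shape. A sketch under assumed names (the default value is an assumption for illustration):

```java
// Hypothetical contract shapes for two concurrently supported API versions.
record OrderV1(String id, int quantity) {}

// V2 adds a currency field that V1 clients never sent.
record OrderV2(String id, int quantity, String currency) {}

class OrderVersionUpgrader {
    // Upgrade an old-version payload by filling the new field with a
    // documented default, so only this edge code knows about V1.
    static OrderV2 fromV1(OrderV1 v1) {
        return new OrderV2(v1.id(), v1.quantity(), "EUR");
    }
}
```

Retiring V1 then means deleting the old record and the upgrader, with no change to the core.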

Microservices architecture for a system using multiple content providers

We have a huge monolithic application which talks to multiple providers for content.
Each of these providers has a different API contract, but the overall schemas are very similar.
Right now we are using command design pattern and transforming responses from each provider into a common schema for our frontend.
What would be the right approach for deciding the modules of our microservices?
Should we break them down by business logic, per provider, or both (business logic + provider)?
Help please.
I consider independent deployability to be the cornerstone of microservices. All features of microservices can be traced back to independent deployability.
That being said, the idea is to decompose business functions in such a way that each module remains independently deployable. A module, in my opinion, should own all interaction with all possible vendors, encompassing its complete business domain.
The second chapter of Sam Newman's book Monolith to Microservices expands on this area.
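One way to read "a module owns all interaction with all possible vendors" in code: the module exposes one common schema and hides a per-provider adapter behind a single interface. A sketch with invented provider names:

```java
import java.util.List;

// The module's common schema, as served to the frontend.
record Content(String title, String body) {}

// One adapter per provider; the rest of the module only sees this interface.
interface ContentProvider {
    List<Content> fetch();
}

// Hypothetical provider adapters; a real one would call the vendor's API
// and translate its contract into the common schema.
class ProviderAAdapter implements ContentProvider {
    public List<Content> fetch() {
        return List.of(new Content("headline-from-A", "text-from-A"));
    }
}

class ProviderBAdapter implements ContentProvider {
    public List<Content> fetch() {
        return List.of(new Content("title-from-B", "body-from-B"));
    }
}

// The business module aggregates across providers; callers never see
// vendor-specific contracts, so adding a vendor touches only this module.
class ContentModule {
    private final List<ContentProvider> providers;

    ContentModule(List<ContentProvider> providers) { this.providers = providers; }

    List<Content> allContent() {
        return providers.stream().flatMap(p -> p.fetch().stream()).toList();
    }
}
```

This keeps the split business-first: the deployable unit is the content module, not one service per vendor.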

How to use Clean Architecture in Microservices?

I just finished reading Uncle Bob's "Clean Architecture" and now wondering how to apply it in the context of microservices!
On one hand, I think that microservices fall in the "Framework-Drivers" layer since they are an implementation on top of use-cases (they are ways to serve use-cases). This way, we focus on the core of the app (Entities and Use Cases) and stay flexible in the implementation of the outer layers (including microservices). But since each microservice can be maintained by a different developer/team, they will have a bad time when use-cases change (it is harder to predict who will be impacted).
On the other hand, we can split our app into multiple microservices, decoupled from each other, and apply Clean Architecture inside each one. The pro of this approach is that each microservice does one thing and does it well. But the problem is that we would start designing with technical separations (microservices), which violates the main Clean Architecture principle of focusing on the business. Also, it will be hard to avoid duplicating code if two microservices use the same entity or use-case!
I think the first scenario is the best, but I would like to have feedback from fellow developers on the long-term benefits of both scenarios, and potential troubles.
My two cents:
In Uncle Bob's words, "Microservices are a deployment option, not an architecture". Each micro-service should be deployable and maintainable by a different team (which can be in a different geographical location). Each team can choose its own architecture, programming language, tools, frameworks, etc., and forcing every team to use the same language, tool, or architecture does not sound good. So each micro-service team must be able to pick its architecture.
How can each team code, maintain, and deploy their own micro-service without conflicting with other teams' code? This question brings us to how to separate micro-services. IMHO they should be separated by feature (the same principle applies to the modularization of mobile application projects, where independent teams should be able to work on separate modules/micro-services).
After separating micro-services, the communication between them is an implementation detail. It can be done through web-sockets, a REST API, etc. Inside each micro-service, if the team decides to follow Clean Architecture, they can have multiple layers based on Clean Architecture principles (Domain/Core - Interface Adapters - Presentation/API & Data & Infrastructure). There can and will be duplicated code across micro-services, which is OK for micro-services.
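Inside a single micro-service, those layers can be as small as a use case depending only on a port, with an infrastructure adapter implementing it. A minimal sketch (all names hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Domain/Core: an entity plus a driven port. Nothing here mentions HTTP or a DB.
record Invoice(String id, long amountCents) {}

interface InvoiceRepository {            // port, owned by the core
    Optional<Invoice> byId(String id);
}

// Use case: pure orchestration over the port.
class GetInvoiceAmount {
    private final InvoiceRepository repository;

    GetInvoiceAmount(InvoiceRepository repository) { this.repository = repository; }

    long execute(String id) {
        return repository.byId(id).map(Invoice::amountCents).orElse(0L);
    }
}

// Infrastructure: an adapter implementing the port. Swapping this for a
// real database driver never touches the core.
class InMemoryInvoiceRepository implements InvoiceRepository {
    private final Map<String, Invoice> store = new HashMap<>();

    public Optional<Invoice> byId(String id) {
        return Optional.ofNullable(store.get(id));
    }

    void save(Invoice invoice) { store.put(invoice.id(), invoice); }
}
```

The dependency direction (adapter depends on core, never the reverse) is what each team keeps, whatever else they vary.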
As #lww-pai-long said in his answer here, splitting based on Domain responsibilities and DDD is in most cases the best solution.
Still, if you have worked with a system using micro-services, you soon realize that there are other things involved here as well.
DDD Bounded Context as base for micro-services
In most cases, splitting your application into micro-services based on Bounded Contexts is the safe way to go. From experience I would even say that in some parts of the Domain you could go further and have multiple micro-services per Bounded Context. One example would be a quite big part of the Domain representing one Bounded Context. Another would be using CQRS for a particular Domain: then you can end up with a Write/Domain micro-service and a Views/Read micro-service.
You can read in this answer how you can split your Domain to micro-services.
It would be advisable as you said to "apply Clean Architecture inside each microservice".
Also, it will be hard to not duplicate code if two microservices use the same entity or use-case!
This is something you have to deal with when working with micro-services in most cases. Duplicating code and/or data across multiple micro-services is a common drawback of working with micro-services. You have to accept this, as in exchange you get the isolation and independence of each micro-service and its database. The problem can be partly solved by using shared libraries as some sort of packages, but be careful: this is not the best approach for all cases. Here you can read about using common code and libraries across micro-services. Unfortunately, not all advice and principles from Uncle Bob's "Clean Architecture" can be applied when using micro-services.
Non Domain or technical operation micro-services
Usually, if your solution uses micro-services, you will have some micro-services which are not Domain-specific but rather perform some kind of technical task or non-business operation directly. Examples could be:
micro-service for report generation
micro-service for email generation and forwarding
micro-service for authorization/permission management
micro-service for secret management
micro-service for notification management
These are not services you will get by splitting your solution based on DDD principles, but you still need them as general solutions, as they can be consumed by multiple other services.
Conclusion
When working with micro-services you will most of the time have a mixture of Domain-specific and Domain-agnostic micro-services. I think Clean Architecture can be looked at from a slightly different perspective when working with micro-services.
On one hand, I think that microservices fall in the "Framework-Drivers" layer since it's an implementation on top of use-cases (they are ways to serve use-cases.)
It kind of does, but it also falls into the other layers, like Entities and Use Cases. I think it goes in the direction that, if you work on Domain-specific services, this diagram becomes the architecture of each micro-service, not a concept above all micro-services. In the applications where I worked with micro-services, each micro-service (the ones based on a DDD Bounded Context) had most if not all of these layers. The Domain-agnostic services are an exception, as they are not based on Domain Entities but rather on some task or operation like 'Create an Email' or 'Create a PDF report from an HTML template'.
I think this question may be better suited to Software Engineering, but I'll answer anyway.
My approach would be to use DDD and define each microservice as a Domain Service grouping Use Cases semantically, then link Domain Services with Bounded Contexts.
Sam Newman talks about the importance of separating microservices by domain abstractions rather than technical ones in Building Microservices.
The point he makes, basically, is that defining scaling strategies for microservices based on subdomains will better match the "real life" constraints observed on the production system than using technically based microservices and trying to define an abstract strategy.
And if you look at how something like Kubernetes works, it seems to push in that direction: a pod ends up being a microservice with multiple containers defined as a complete stack matching a sub-domain of the overall application.
It then becomes easier, in an e-commerce application for example, to scale the Payment service independently of the Cart service based on customer activity than to scale the web servers independently of the job queues in an abstract way.
The way those Bounded Contexts communicate, i.e. request-based or event-based, depends on the specific relation between them. To use the same example, a Cart may generate an event that triggers Payment, while the same Cart may need to request the Inventory before validating the order.
And at the end of the day, those Domain Services and Bounded Contexts can be implemented the same way when starting with a monolith; even the communication between Bounded Contexts can be. The underlying communication protocol becomes an implementation detail that can (kinda) easily be switched when transitioning to a distributed, a.k.a. microservices, architecture.
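The two collaboration styles from the Cart/Payment/Inventory example can be sketched in-process first, which also shows why the protocol is an implementation detail: the in-memory bus below could later be swapped for a broker without changing the domain code. All names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

record OrderPlaced(String orderId, long amountCents) {}

// Tiny in-process stand-in for a message broker.
class EventBus {
    private final List<Consumer<OrderPlaced>> handlers = new ArrayList<>();
    void subscribe(Consumer<OrderPlaced> handler) { handlers.add(handler); }
    void publish(OrderPlaced event) { handlers.forEach(h -> h.accept(event)); }
}

// Request-based collaboration: Cart asks Inventory before validating.
interface InventoryClient {
    boolean inStock(String orderId);
}

class CartService {
    private final EventBus bus;
    private final InventoryClient inventory;

    CartService(EventBus bus, InventoryClient inventory) {
        this.bus = bus;
        this.inventory = inventory;
    }

    boolean checkout(String orderId, long amountCents) {
        if (!inventory.inStock(orderId)) return false;      // synchronous request
        bus.publish(new OrderPlaced(orderId, amountCents)); // fire-and-forget event
        return true;
    }
}

// Event-based collaboration: Payment reacts to the Cart's event without
// the Cart ever knowing Payment exists.
class PaymentService {
    final List<String> capturedOrders = new ArrayList<>();
    PaymentService(EventBus bus) {
        bus.subscribe(e -> capturedOrders.add(e.orderId()));
    }
}
```

Cart depends on Inventory's answer (a request), but merely announces the order (an event) for Payment: the asymmetry mirrors the relation between the contexts.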

Why does each microservice get its own database?

It seems that in the traditional microservice architecture, each service gets its own database with a different understanding of the data (described here). Sometimes it is considered permissible for databases to duplicate data. For instance, the "Users" service might know essentially everything about a user, whereas the "Posts" service might just store primary keys and usernames (so that the author of a post can have their name displayed, for instance). This page talks about eventual consistency, sources of truth, and other related concepts when data is duplicated. I understand that microservice architectures sometimes include a shared database, but most places I look suggest that this is a rare strategy.
As for why each service typically gets its own database, all I've seen so far is "so that each service owns its own resources," but I'm not convinced that a) the service layer in any way "owns" the persisted resources accessed through the database to begin with, or that b) services even need to own the resources they require rather than accessing necessary subsets of the master resources through a shared database.
So what are some of the justifications that each service in a microservice architecture should get its own database?
There are a few reasons why it does make sense to use a separate database per micro-service. Some of them are:
Scaling
Splitting your domain into micro-services is fine. You can scale a particular micro-service on its web-server on demand, or scale out as needed; that is obviously one of the benefits of using micro-services. More importantly, you can have micro-service-1 running on, for example, 10 servers because its traffic demands it, while micro-service-2 only requires 1 web-server, so you deploy it on 1 server. The good thing is that you control this, and you can manage your computing resources in order to save money, as Cloud providers are not cheap.
Considering this what about the database?
If you had one database for multiple services, you could not do this: you could not scale the databases individually, as they would all be on one server.
Data partitioning to reduce size
Automatically, as you split your domain into micro-services with each containing its own database, you split the amount of data stored in each database. Ideally, you can then have smaller database servers with less computing power and/or RAM.
In general, paying for multiple small servers is cheaper than paying for one large one,
so in this case you could make use of this fact and save some resources as well.
If an already split-by-domain database still holds a large amount of data, techniques like data sharding or data partitioning can be applied additionally, but that is another topic.
Which db technology fits the business requirement
This is a very important argument for having multiple databases: it allows you to pick the database technology which best fits your business requirement, in order to get the best performance or usage out of it. For example, a specific micro-service might have read-heavy operations with very complex filter options and a full-text search requirement; using Elasticsearch in that case would be a good choice. Another micro-service might use SQL Server, as it requires SQL-specific features like transactional behavior or similar. If for some reason you had one database for all services, you would be stuck with one database technology which might not be so performant for those requirements. It is a compromise, for sure.
Developer discipline
If for some reason a couple of micro-services shared their database, you would need to deal with the human factor: the developers would need to be disciplined enough not to cross domains and access or modify the other micro-services' database (tables, collections, etc.), which would be hard to achieve and control. In large organisations with a lot of developers this could be a serious problem. With a hard, physical split, this is not an issue.
Summary
There are some arguments for having a database per micro-service, but also some against it. In general, the guideline when using micro-services is to have each micro-service, together with its data, be autonomous, so that in the ideal case it can work independently (this is not always the case). It is definitely a compromise, as is using micro-services in general. As always, there are exceptions to the rule: a micro-services architecture is flexible and very dependent on your Domain needs and requirements. If you and your team identify that it makes sense to merge multiple micro-service databases into one, and that this solves a lot of your problems, then go for it.
Microservices
Microservices advocate design constraints where each service is developed, deployed and scaled independently. This philosophy is only possible if you have a database per service. How can I continue my business if I have a DB failure, and what steps can I take to mitigate this? The DB is an essential part of any enterprise application. I agree there are a number of different challenges when each service has its own database.
Why Independent database?
Unlike other approaches, this approach not only keeps your code-base clean and extendable, but it truly removes the single point of failure in your business. To achieve this, services can sometimes duplicate data as well; that is acceptable as long as the service stays autonomous, and services can only be autonomous if each has its own database.
From a business point of view, let's take an eCommerce application. You have microservices like Booking, Order, Payment, Recommendation, Search and so on, and the database is shared. What happens if the DB is down? All your services are down, and there is no point in using a microservices architecture other than having a clean code base.
If each service has its own database, I don't mind if my recommendation service is not working: I can still search and book the order, and I haven't lost the customer. That's the whole point.
It comes at a cost and with challenges, but in the longer run it pays off.
SQL / NoSQL
Each service has its own needs. To get the best performance, I can use SQL for the payment service (transactions) and I can (and should) use NoSQL for the recommendation service. A shared database wouldn't help me in this case. In modern cloud architectures like CQRS, Event Sourcing and materialized views, we sometimes use two different databases for the same service to get the best performance out of it.
Again, a database per service is not only about resources or how much data it should own; we really have to see the bigger picture. Yes, we have certain practices about how much data and duplication is good or bad, but that's another debate.
Hope that helps !

How to reduce redundancy in a service implemented using multilayer architecture while maintaining consistency across the system?

Currently our service is implemented using a multilayer architecture dividing the whole service into three:
API
Business
Persistence
However, this introduces a lot of redundancy within our system. A common adage in the industry is "DRY" (Don't Repeat Yourself). The redundancy has increased the development time, made the system more fragile, and cluttered our code with "copy" methods.
To give a better idea, say we have a Person service. This would require the following:
Person entity - JPA annotated class for ORM
Repository service request - contains the field values of the Person domain object to be persisted, with additional persistence options
Repository service response - contains field values of the Person entity
Person - class with business logic, domain fields and computed fields
Domain service request - contains field values of Person resource and additional business options
Domain service response - contains field values of Person business object excluding those that shouldn't be visible to API users
Person resource - class representing what should be viewable to the API users
And things get worse when taking nested objects into consideration.
The current design enforces the separation of concerns (business, API, persistence); however:
Currently, the differences between the classes are very small, causing us to have very similar classes with only minor differences.
Services returning service response objects with fields, instead of the objects themselves, hampers services from depending on one another.
Questions:
Is it worth it to go through with this design?
What are our alternatives?
What could we change to improve our situation?
I know where you're coming from. My shortest advice would be: read "Domain-Driven Design: Tackling Complexity in the Heart of Software" by Eric Evans.
A central part of DDD is the domain: POJOs containing the majority of the business logic.
The building blocks are more or less what you've already mentioned.
There are three kinds of services:
Application Services that are responsible for orchestration, transaction management and authorization
Domain Services contain business logic that doesn't fit other domain building blocks: entities, policies, factories, value objects. Create them only if you can't use other domain mechanisms.
Infrastructure Services. The most common are repositories, which are responsible for the persistence of aggregate roots (a role played by some of the entities), and only them; this is in contrast with DAOs, which are created for any entity. Other infrastructure services might be, for instance, clients of Web Services used by the application.
This richness of different kinds of services, together with the idea of pushing logic down as far as possible (because logic in the domain is the most easily reusable), gives developers the tools they need to build comprehensive and maintainable complex software. Note that DDD might be too heavy for simple CRUD apps.
The entry points to the system are either Web Service endpoints or controllers (for Web apps where the UI is generated on the backend, as with JSPs or JSF).
For middle-sized systems I like to use an approach inspired by CQRS: in order to avoid the inevitable slowness of loading multiple aggregate roots for display purposes (the read side), I write dedicated query services that return DTOs straight from the DB, in the case of JPA using the "select new" mechanism.
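The read side of that approach boils down to a slim DTO built directly by the query. Below is a sketch with hypothetical entity and DTO names; the JPQL string shows the constructor expression that, in real JPA code, would be passed to `EntityManager.createQuery`, while the in-memory method stands in for the projection so the shape is visible without a database:

```java
import java.util.List;

// Write-side entity (in real code, a JPA-annotated class).
record Person(Long id, String name, String city) {}

// Read-side DTO: exactly the fields the view needs, nothing more.
record PersonSummary(Long id, String name) {}

class PersonQueryService {
    // With JPA, the "select new" constructor expression builds DTOs
    // directly in the query, skipping aggregate loading entirely.
    static final String JPQL =
        "select new com.example.read.PersonSummary(p.id, p.name) "
      + "from Person p where p.city = :city";

    // In-memory stand-in for the same projection.
    List<PersonSummary> findByCity(List<Person> people, String city) {
        return people.stream()
                .filter(p -> p.city().equals(city))
                .map(p -> new PersonSummary(p.id(), p.name()))
                .toList();
    }
}
```

Because the query service bypasses the domain model, it can change shape freely with the views it serves, without touching the aggregates.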
