How to use Clean Architecture in Microservices?

I just finished reading Uncle Bob's "Clean Architecture" and now wondering how to apply it in the context of microservices!
On one hand, I think that microservices fall in the "Framework-Drivers" layer since they are an implementation on top of use-cases (they are ways to serve use-cases). This way, we focus on the core of the app (Entities and Use-cases) and stay flexible in the implementation of the outer layers (including microservices). But since each microservice can be maintained by a different developer/team, those teams will have a bad time when use-cases change (it is harder to predict who will be impacted).
On the other hand, we can split our app into multiple microservices, decoupled from each other, and apply Clean Architecture inside each microservice. The pro of this approach is that we can focus on each microservice doing one thing, and doing it well. But the problem is that we start designing using technical separations (microservices), which violates the main Clean Architecture principle of focusing on the business. Also, it will be hard not to duplicate code if two microservices use the same entity or use-case!
I think the first scenario is the best, but I would like to have feedback from fellow developers on the long-term benefits of both scenarios, and potential troubles.

My two cents:
In Uncle Bob's words, "Micro-services are a deployment option, not an architecture". Each micro-service should be independently deployable and maintainable by a different team (which can be in a different geographical location). Each team can choose its own architecture, programming language, tools, frameworks, etc. Forcing every team to use the same programming language, tool, or architecture does not sound good, so each micro-service team must be able to pick its own architecture.
How can each team code/maintain/deploy their own micro-service without conflicting with other teams' code? This question brings us to how to separate micro-services. IMHO they should be separated by feature (the same principle applies to the modularization of mobile application projects, where independent teams should be able to work on separate modules/micro-services).
After separating the micro-services, the communication between them is an implementation detail. It can be done through WebSockets, a REST API, etc. Inside each micro-service, if the team decides to follow Clean Architecture, they can have multiple layers based on Clean Architecture principles (Domain/Core - Interface Adapters - Presentation/API & Data & Infrastructure). There can (and will) be duplicated code across micro-services, which is OK for micro-services.
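As a rough illustration of what those layers could look like inside a single micro-service, here is a minimal Java sketch; all names (an "orders" service, Order, GetOrderUseCase) are invented for the example, not prescribed by Clean Architecture:

```java
// Purely illustrative layering for a hypothetical "orders" micro-service.

// Domain/Core: entity and use case, free of framework dependencies.
record Order(String id, long amountCents) {}

interface OrderRepository {            // port owned by the core
    Order findById(String id);
}

class GetOrderUseCase {
    private final OrderRepository repository;

    GetOrderUseCase(OrderRepository repository) {
        this.repository = repository;
    }

    Order execute(String orderId) {
        return repository.findById(orderId);
    }
}

// Interface Adapters / Data & Infrastructure: an implementation detail the team can swap.
class InMemoryOrderRepository implements OrderRepository {
    private final java.util.Map<String, Order> store = new java.util.HashMap<>();

    @Override
    public Order findById(String id) {
        return store.get(id);
    }
}
```

The core depends only on the OrderRepository port, so the team can replace the in-memory adapter with a database or HTTP client without touching the use-case.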

As #lww-pai-long said in his answer here, splitting based on Domain responsibilities and DDD is in most cases the best solution.
Still, if you have worked with a system using micro-services, you will soon realize that there are other things involved here as well.
DDD Bounded Context as base for micro-services
In most cases, splitting your application into micro-services based on Bounded Contexts is the safe way to go. From experience I would even say that in some parts of the Domain you could go further and have multiple micro-services per Bounded Context. One example would be if you have a quite big part of the Domain which represents one Bounded Context. Another example would be if you use CQRS for a particular Domain; then you can end up having a Write/Domain micro-service and a Views/Read micro-service.
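For the CQRS case, a very condensed sketch of how the Write/Domain and Views/Read sides might be split; the Order names and the event shape are assumptions made up for illustration, and the transport between the two services is omitted:

```java
// Write/Domain side: validates a command and emits an event (how it reaches the read
// side - broker, log, etc. - is left out).
record PlaceOrderCommand(String orderId, long amountCents) {}
record OrderPlacedEvent(String orderId, long amountCents) {}

class OrderCommandHandler {
    OrderPlacedEvent handle(PlaceOrderCommand command) {
        if (command.amountCents() <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        return new OrderPlacedEvent(command.orderId(), command.amountCents());
    }
}

// Views/Read side: keeps its own denormalized view, optimized for queries.
class OrderViewProjector {
    private final java.util.Map<String, Long> amountByOrderId = new java.util.HashMap<>();

    void on(OrderPlacedEvent event) {
        amountByOrderId.put(event.orderId(), event.amountCents());
    }

    Long amountOf(String orderId) {
        return amountByOrderId.get(orderId);
    }
}
```

In practice the event would travel over a broker or log, and each side would own its own storage.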
You can read in this answer how you can split your Domain into micro-services.
It would be advisable as you said to "apply Clean Architecture inside each microservice".
Also, it will be hard not to duplicate code if two microservices use
the same entity or use-case!
This is something that you have to deal with when working with micro-services in most cases. Duplicating code and/or data across multiple micro-services is a common drawback of working with micro-services. You have to take this into account, as on the other hand you get isolation and independence of each micro-service and its database. This problem can be partly solved by using shared libraries as some sort of packages, but be careful: this is not the best approach for all cases. Here you can read about using common code and libraries across micro-services. Unfortunately, not all advice and principles from Uncle Bob's "Clean Architecture" can be applied when using micro-services.
Non-Domain or technical operation micro-services
Usually, if your solution is using micro-services, you will more or less have micro-services which are not Domain specific but rather handle some kind of technical task or non-business operation directly. Examples could be something like:
micro-service for report generation
micro-service for email generation and forwarding
micro-service for authorization/permission management
micro-service for secret management
micro-service for notification management
These are not services which you will get by splitting your solution based on DDD principles, but you still need them as part of the general solution, as they could be consumed by multiple other services.
Conclusion
When working with micro-services you will most of the time have a mixture of Domain-specific and Domain-agnostic micro-services. I think Clean Architecture can be looked at from a slightly different perspective when working with micro-services.
On one hand, I think that microservices fall in the
"Framework-Drivers" layer since they are an implementation on top of
use-cases (they are ways to serve use-cases).
It kind of does, but it also falls into the other layers like Entities and Use Cases. I think it goes in the direction that, if you work on Domain-specific services, this diagram becomes the architecture of each micro-service, but not a concept above all micro-services. In the applications where I worked with micro-services, each micro-service (the ones based on a DDD Bounded Context) had most of these layers, if not all of them. The Domain-agnostic services are an exception to this, as they are not based on Domain Entities but rather on some task or operation like 'Create an Email', 'Create a PDF report from an HTML template' or similar.

I think this question may be better suited for Software Engineering, but I'll answer anyway.
My approach would be to use DDD and define each microservice as a Domain Service grouping Use Cases semantically, then link Domain Services with Bounded Contexts.
Sam Newman talks about the importance of separating microservices by domain abstraction, and not technical abstraction, in Building Microservices.
The point he makes, basically, is that defining scaling strategies for microservices based on subdomains will better match the "real life" constraints observed on the production system than using technically based microservices and trying to define an abstract strategy.
And if you look at how something like Kubernetes works, it seems to push in that direction. A pod ends up being a microservice with multiple containers, defined as a complete stack matching a sub-domain of the overall application.
It then gets easier in an e-commerce application, for example, to scale the Payment service independently of the Cart service based on customer activity than to scale the web services independently of the job queues in an abstract way.
The way those Bounded Contexts will communicate, i.e. request based or event based, depends on the specific relation between them. To use the same example, a Cart may generate an event that will trigger the Payment, while the same Cart may need to request the Inventory before validating the order.
And at the end of the day, those Domain Services and Bounded Contexts can be implemented the same way when starting with a monolith; even the Bounded Context communication can be. The underlying communication protocol becomes an implementation detail that can easily (kinda) be switched when transitioning to a distributed, a.k.a. microservices, architecture.
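One way to picture that "implementation detail" point is to hide the communication behind a port owned by the calling Bounded Context. The sketch below is only illustrative, with invented names, and leaves out the actual broker or HTTP plumbing:

```java
// The Cart context only knows this port; the adapter decides how Payment is reached.
record OrderValidated(String orderId, long amountCents) {}

interface PaymentTrigger {
    void requestPayment(OrderValidated event);
}

// Monolith-friendly adapter: a plain in-process call into the Payment module.
class InProcessPaymentTrigger implements PaymentTrigger {
    @Override
    public void requestPayment(OrderValidated event) {
        System.out.println("calling payment module directly for " + event.orderId());
    }
}

// Distributed adapter: publish the event to a broker instead (publishing code omitted).
class EventBusPaymentTrigger implements PaymentTrigger {
    @Override
    public void requestPayment(OrderValidated event) {
        System.out.println("publishing OrderValidated for " + event.orderId());
    }
}
```

Starting as a monolith you wire in the in-process adapter; moving to microservices mostly means swapping it for the event-publishing one.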

Related

Microservices communication within bounded context

As a part of our DDD design, we are working on a bounded context and have identified two microservices A and B.
Service A needs to make calls to Service B via a REST API. Service B already provides an OpenAPI spec describing how to get any data. We use the OpenAPI Generator to auto-generate client-side DTOs.
There is exactly a one-to-one mapping between B's DTOs and A's domain objects.
As far as I understand, we should use an anti-corruption layer (using hexagonal architecture) if our service is communicating with an untrusted third-party service. But what should we do if the communication is between internal services, like in the case above?
1. Should microservice B's API DTOs be used directly as domain objects in microservice A? This would mean that we do not create separate domain classes/objects in service A; we treat B's DTOs as domain objects in service A.
2. Should we create an adapter layer to convert B's DTOs to domain objects in microservice A?
3. Should we create domain objects in service A which would also serve as manually written output DTOs for service B? This is the opposite of point 1: instead of treating DTOs as domain objects, we treat domain objects as DTOs.
In my opinion, you are mixing two separate concepts:
The "anticorruption layer", which is a strategic DDD pattern
The layers in a layered or onion architecture, which are tactical patterns to separate concerns within an application
The goal of the anticorruption layer is to translate the ubiquitous language from another Bounded Context into your ubiquitous language so that your code doesn't get polluted with concepts not belonging to your Bounded Context.
The goal of the layers in a layered or onion architecture and specifically the contracts between them (the DTOs you are talking about) is to avoid changes in one part of the code (for example, adding or removing a property in a domain object in the Core) to cause issues somewhere else (like accidentally modifying the public API contract).
If I understand it correctly, your two microservices belong to the same Bounded Context. If that is the case, you shouldn't need any anticorruption layer because both microservices share the same ubiquitous language.
Now, regarding the options that you propose: I'm not sure I fully understand options 2 and 3, but what I'll say is that if you are doing real microservices, you can't use option 1, as those microservices wouldn't be autonomous and independently deployable; a change in microservice B's API would require a change and a coordinated deployment of microservice A.
So, design your microservices so that you have control over how their parts evolve (be able to make non-breaking changes in their APIs, change storage strategies without having to change the Core, change the core without having to change the API, etc). Of course, if a microservice is very simple, don't over-engineer it.
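A minimal sketch of what the adapter/mapping at the boundary (option 2) could look like; OrderDto, Order, and OrderStatus are invented stand-ins for the generated client DTO and for service A's own model:

```java
// Stand-in for the OpenAPI-generated client DTO of service B.
record OrderDto(String id, String status) {}

// Service A's own domain model.
enum OrderStatus { OPEN, PAID, CANCELLED }
record Order(String id, OrderStatus status) {}

// The mapper/adapter is the only place in service A that knows B's wire format.
class OrderDtoMapper {
    Order toDomain(OrderDto dto) {
        return new Order(dto.id(), OrderStatus.valueOf(dto.status().toUpperCase()));
    }
}
```

With this in place, a renamed or added field in B's contract only touches the generated DTO and this mapper, not A's core.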
The answer here is simple: there is no magic to make breaking changes non-breaking, so treat internal dependencies like you would external ones. That requires high discipline around versioning the APIs and moving versions ahead. Service B needs to publish new API version specifications ahead of time, and Service A needs to be given time to adopt them. Depending on the size of your organization, it can be useful to support multiple API versions concurrently (for a while, that is). You may also want Team B to monitor usage of their API versions (e.g. how many consumers are still using an old version of the API, etc.).
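As one possible illustration of running two API versions side by side (assuming a Spring MVC setup; the endpoints and DTOs below are invented, not taken from the question):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Two contract versions of the same (made-up) resource, served side by side.
record OrderV1Dto(String id, String status) {}
record OrderV2Dto(String id, String status, String currency) {}

@RestController
class OrderVersionedController {

    // v1 stays available (and monitored) while consumers migrate.
    @GetMapping("/api/v1/orders/{id}")
    OrderV1Dto getOrderV1(@PathVariable String id) {
        return new OrderV1Dto(id, "PAID");
    }

    // v2 is published ahead of time so the consuming team has time to adopt it.
    @GetMapping("/api/v2/orders/{id}")
    OrderV2Dto getOrderV2(@PathVariable String id) {
        return new OrderV2Dto(id, "PAID", "EUR");
    }
}
```

The old version is retired only once usage monitoring shows consumers have moved on.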
I would not put a high focus on how exactly your API specification is documented or which technology is used to generate client bindings (if at all). This should be up to the Service B owners, and Service A should be able to adopt whatever is being offered. In general, I would put the focus on simplicity and avoid too many additional layers if they don't provide a clear benefit. And Team A should be free to design their internals as they see best fit (even with respect to data that comes from Service B). The whole point of a microservices architecture is to be able to move forward independently.

Microservices architecture for a system using multiple content providers

We have a huge monolithic application which talks to multiple providers for content.
Each of these providers has a different API contract, but the overall schema is almost the same.
Right now we are using the command design pattern and transforming responses from each provider into a common schema for our frontend.
What would be the right approach for deciding the modules for our microservices?
Should we break them down by business logic, per provider, or by both business logic + provider?
Help please.
I consider individual deployability to be the cornerstone of microservices. All features of microservices can be traced back to individual deployability.
That being said, the idea is to decompose business functions in such a way that each module remains individually deployable. A module, in my opinion, should own all interaction with all possible vendors, encompassing its complete business domain.
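Applied to the content-provider question above, that could look roughly like the sketch below: one content module owning an adapter per provider and exposing only the common schema (all names here are invented for illustration):

```java
// Common schema exposed to the rest of the system (field names invented).
record Content(String id, String title, String body) {}

interface ContentProvider {
    Content fetch(String contentId);
}

// One adapter per provider contract, all owned and deployed by the same content module.
class ProviderAAdapter implements ContentProvider {
    @Override
    public Content fetch(String contentId) {
        // call provider A's API here and translate its response into the common schema
        return new Content(contentId, "title from A", "body from A");
    }
}

class ProviderBAdapter implements ContentProvider {
    @Override
    public Content fetch(String contentId) {
        // same idea for provider B, with its own contract quirks handled here
        return new Content(contentId, "title from B", "body from B");
    }
}
```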
The second chapter of Sam Newman's book Monolith to Microservices expands on this area.

Best practice to share domain model between two microservices

Are there any best practices or guidelines on how to share the domain model between two micro-services?
I have a micro-service (1) which provides endpoints to interact with a resource (e.g. Order), all CRUD, and another micro-service (2) which performs a specific non-CRUD task on the resource (Order). Micro-service (2) needs almost all the order attributes to perform its operation. In this case, does it make sense to create a common shared library of the domain model and share it between the two services? I could technically combine 1 and 2, but micro-service (2) needs to support scalability as it is quite memory- and CPU-intensive.
As far as I can see, these two services need to share the same data, and you are thinking of also sharing the library that is used to read/modify this data.
They seem to belong to the same "bounded context", so it would be ideal to consider them as a single service.
If you really can't separate their data and their logic, it would be better to keep them together.
I do not think that components of the same microservice have to follow the same deployment pattern: they will be the same microservice, managed by the same team, always deployed together, sharing the same source repository, but they can be deployed with different strategies because only the second component needs high scalability.
So same bounded context, same service, but different components.
I do not have a lot of experience, so take this as my personal opinion.

Microservices, Dependencies and Events

I've been doing a lot of googling regarding managing dependencies between microservices. We're trying to move away from a big monolithic app to micro-services in order to scale organizationally and be able to develop faster, with multiple teams working in parallel.
However, as we're trying to functionally partition the monolith into microservices, we see how intertwined the business logic and data really are. This was not a problem when we were sitting on top of one big DB and were able to do big relational joins. But with microservices, this becomes a problem.
One solution is to make microservice-A go to 5-10 other microservices to get the necessary data (this is the equivalent of a DB view with joins). Another solution is to make microservice-A listen to events from 5-10 other services and populate local storage with the relevant info (this is the equivalent of a materialized view). Either way, microservice-A is coupled with 5-10 other services, and if new info is needed in microservice-A, some of the services that it depends upon might need to be released prior to microservice-A. Please note that microservice-A is itself depended upon by other services. Bottom line, we end up with DISTRIBUTED dependency hell.
Many articles advocate for the second solution, i.e. something along the lines of Event Sourcing, Choreography, etc.
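For reference, a minimal, broker-agnostic sketch of what that second solution can look like inside microservice-A; the event and field names are invented for illustration:

```java
// Event published by another service; the shape is invented for the example.
record CustomerRenamedEvent(String customerId, String newName) {}

// microservice-A's local, event-fed copy of the data it needs.
class LocalCustomerView {
    private final java.util.Map<String, String> nameByCustomerId = new java.util.HashMap<>();

    // Called by whatever messaging listener the team uses (Kafka, RabbitMQ, ...).
    void on(CustomerRenamedEvent event) {
        nameByCustomerId.put(event.customerId(), event.newName());
    }

    // Reads hit the local view instead of calling the owning service synchronously.
    String customerName(String customerId) {
        return nameByCustomerId.get(customerId);
    }
}
```

The trade-off is the one described above: microservice-A now owns a copy of the data and has to keep it in sync through events.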
I would appreciate any shared experiences, recommendations and insights.
Philometor.
While not technically an "answer", I can definitely share some of my observations and experiences. Your question concerning services calling other services for database operations reminded me of a project where an architect sold senior management on the idea of "decoupling" persistence from the rest of the applications by implementing hundreds of REST interfaces in what essentially was a distributed DAO pattern in front of a very large enterprise database. The project ended up exactly the way I predicted - a dismal failure.
Microservices aren't about turning a monolithic application into a distributed monolithic application. In my example project above, the monolith was turned into a stove-piped, fragile, chaotic mess, with the coupling only moved to service contracts instead of Java class method signatures, and with a performance hit so bad the application was unusable. Last I heard they are still running their original monolith.
Microservices should be more of a vertical partitioning of your application and not a horizontal one. In my opinion it's better to think in terms of business function partitioning rather than "converting" an existing monolith. There's no rule that determines how big a microservice must be, but it should be big enough to do one complete synchronous function without needing to directly depend on outside services (as much as possible) to complete its work. If a microservice performs a complex business function that affects 50 tables, so be it! It owns those many tables. Ideally if a service goes down, it should affect only that business functionality it's responsible for, and not directly affect other services. As you can see, this thinking is the complete opposite from that which produced the distributed mess in my project example.
Not only do you need to ensure that the motivation behind replacing monoliths with microservices is sound, but also you need to step outside the monolith and revisit the actual business and begin partitioning that instead. Like everything else, baby steps are the way to go. Start with one small complete business function, and convert that into a single microservice instead of trying to replace a monolith all at once.

Best Practices when Migrating to Microservices

To anyone with real world experience breaking a monolith into separate modules and services.
I am asking this question having already read the MonolithFirst blog entry by Martin Fowler. When taking a monolith and breaking it into microservices, the "size" element of the equation is the one that I ponder over the most. Specifically, how to approach breaking a monolith application (we're talking 2001: A Space Odyssey; as in, it is that old and that large) into microservices without getting overly fine-grained or staying too monolithic. The end goal is creating separate modules that can be upgraded and scaled independently.
What are some recommended best practices based on personal experience of breaking a monolith into microservices?
The rule of thumb is to break the monolith based on bounded contexts. The most common way of defining a bounded context is using a BU (Business Unit). For example, the module which does the actual payment is usually a separate BU.
The second thing to consider is the overhead micro-services bring. You should analyse the hardware, monitoring, and infra pieces before completely breaking up the service. What I have seen is people taking smaller microservices out of the monolith instead of going and writing, say, 10 new services and deprecating the monolith.
My advice would be to take an incremental approach. Take the first BU which is being worked upon out of the monolith. This will also give a good learning curve for the whole team.
You should clearly distinguish sub-domain areas (bounded contexts) from your domain.
Usually (if everything is fine with your architecture) you already have some separate components in your monolith application which are responsible for each sub-domain. These components interact with each other in one process (in the monolith application), and you should think about how to put them into separate processes. Of course, you will need to do a lot of refactoring when moving parts of the monolith to microservices one by one.
Always remember that every microservice is responsible for some sub-domain.
I strongly recommend learning Domain-Driven Design.
Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans
Implementing Domain-Driven Design by Vaughn Vernon
Also learn the CQRS pattern.
At the beginning you should also decide how your microservices will interact with each other.
There are several options:
Direct calls from one service to another
Send messages through some dispatcher service, which abstracts the client service from knowing where the called (destination) services are located. This approach is similar to how a proxy server like NGINX works.
Interact through some messaging bus (middleware), like RabbitMQ
You can combine these options; for example, Query requests can be processed through the dispatcher service, while Commands and Events go through the message bus.
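A small sketch of what that combination might look like from the service's point of view; the query and event types are made up, and the dispatcher and bus implementations are deliberately left behind interfaces:

```java
// Query and event types are invented; the dispatcher and the bus stay behind interfaces.
record IsInStockQuery(String productId) {}
record OrderPlacedEvent(String orderId) {}

interface QueryGateway {
    boolean isInStock(IsInStockQuery query);   // e.g. routed through the dispatcher service
}

interface EventPublisher {
    void publish(OrderPlacedEvent event);      // e.g. routed through a bus like RabbitMQ
}

class PlaceOrderService {
    private final QueryGateway queries;
    private final EventPublisher events;

    PlaceOrderService(QueryGateway queries, EventPublisher events) {
        this.queries = queries;
        this.events = events;
    }

    void placeOrder(String orderId, String productId) {
        if (queries.isInStock(new IsInStockQuery(productId))) {
            events.publish(new OrderPlacedEvent(orderId));
        }
    }
}
```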
From my experience, the biggest problem will be moving away from the single database that monolith applications usually use.
In addition, some good practices:
Put each microservice in its own repository - this prevents directly reusing the code of one microservice in another.
You also get faster checkouts and builds of each microservice on CI.
Interactions with any service should occur only through its public contracts.
Aim for each microservice to have its own database.
As an example, consider the sub-domains (bounded contexts) of a Tourism Industry application: each bounded context can be serviced by a microservice.
We also started our journey some time back, and I started writing a blog series about exactly this: https://dzone.com/articles/how-i-started-my-journey-in-micro-services-and-how
Basically, what I understood is that to break my problem into different microservices, I need a design framework, which Domain-Driven Design provides (Domain-Driven Design Distilled by Vaughn Vernon).
Then, to implement the design (using CQRS, Event Sourcing, etc.), I need a framework which provides all of the above support.
I found Lagom good for this (Eventuate and Spring Microservices are some other choices).
A sample microservices domain analysis using Domain-Driven Design, by Microsoft: https://learn.microsoft.com/en-us/azure/architecture/microservices/domain-analysis
One more analysis is: http://cqrs.nu/tutorial/cs/01-design
After reading up on Domain-Driven Design, I think Lagom and the links above will help you build an end-to-end application. If you still have any doubts, please ask :)
