I've been trying to refactor a brand new project to follow hexagonal architecture and DDD patterns.
This is the structure of my domain. I have files and customer data. Entity-wise, this makes sense to separate. The "facade" objects connect the ports with the domain. Quick example:
Controller (application layer) --uses--> Facade --uses--> Ports <--implement-- Adapters (infrastructure layer)
The problem I have is that I have a third adapter (not in the picture) that is an external OCR app. It's an external client (we use a Feign client to connect to their API) and it provides customer data (first adapter), but also serves us the raw data of images (second adapter).
My first two adapters have entities, repositories and databases on our local systems, but this third one, given the theory behind hexagonal architecture, makes sense to me as its own separate adapter.
But then how do I use it from my other two adapters? Should the three of them be in the same adapter, since they depend on each other? CustomerData and File have a one-to-many relationship as well, so maybe that makes sense?
I have only implemented the File part so far and have yet to refactor the CustomerData part since I'm trying to wrap my head around the concepts first.
I've seen a lot of articles but most of them are really simple with no real world examples and they have clearly separated domains.
Thanks a lot for the clarification in advance.
For lack of a better idea, since the interface ports are beans implemented by the facades, I'm wiring the ports I need into the other domain's facades and using them the same way a controller of that same domain would. The diagram would be something similar to:
Facade (domain1) --uses--> Port (of domain2) <--implement-- Adapters (infrastructure layer)
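A rough sketch of that wiring (all names are hypothetical; in Spring the port implementation would simply be injected as a bean):

```java
// Port owned by the "file" domain, implemented by that domain's facade
// (or, for driven ports, by an infrastructure adapter).
interface FileStoragePort {
    byte[] loadRawImage(String fileId);
}

// Facade of the "customer" domain. It consumes the file domain's port the
// same way an inbound controller of the file domain would.
class CustomerFacade {
    private final FileStoragePort files;

    CustomerFacade(FileStoragePort files) {
        this.files = files;
    }

    int imageSizeForDocument(String fileId) {
        return files.loadRawImage(fileId).length;
    }
}
```

The customer domain only sees the port interface, never the adapter behind it, so the dependency direction stays intact.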
Edit:
I've found a very extensive article that is very useful for understanding hexagonal architecture and goes even deeper.
Long story short, I'll copy the relevant part:
Triggering logic in other components
When one of our components (component B) needs to do something whenever something happens in another component (component A), we cannot simply make a direct call from component A to a class/method in component B, because A would then be coupled to B.
However, we can make A use an event dispatcher to dispatch an application event that will be delivered to any component listening for it, including B, and the event listener in B will trigger the desired action. This means that component A will depend on an event dispatcher, but it will be decoupled from B.
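A minimal sketch of that idea (names are made up; a real application would typically use its framework's event mechanism, e.g. Spring application events):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Event published by component A, e.g. when a file has been imported.
class FileImported {
    final String fileId;

    FileImported(String fileId) {
        this.fileId = fileId;
    }
}

// Minimal application-event dispatcher: component A depends only on this
// dispatcher, while component B subscribes a listener. A never references B.
class EventDispatcher {
    private final List<Consumer<Object>> listeners = new ArrayList<>();

    void subscribe(Consumer<Object> listener) {
        listeners.add(listener);
    }

    void dispatch(Object event) {
        for (Consumer<Object> listener : listeners) {
            listener.accept(event);
        }
    }
}
```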
Hexagonal Architecture doesn't forbid relationships between adapters.
Anyway, usually we will have a port for each external actor interacting with our business logic, and an adapter to translate to/from the actor.
You can take a look at this:
https://jmgarridopaz.github.io/content/hexagonalarchitecture-ig/chapter1.html
Related
As a part of our DDD design, we are working on a bounded context and have identified two microservices A and B.
Service A needs to make calls to Service B via a REST API. Service B already provides an OpenAPI spec describing how to get any data. We use OpenAPI Generator to auto-generate the client-side DTOs.
There is an exact one-to-one mapping between B's DTOs and A's domain objects.
As far as I understand, we should use an anti-corruption layer (using hexagonal architecture) if our service is communicating with an untrusted third-party service. But what should we do if the communication is between internal services, as in the case above?
1. Should microservice B's API DTOs be used directly as domain objects in microservice A? This would mean we do not create separate domain classes/objects in service A; we treat B's DTOs as domain objects in service A.
2. Should we create an adapter layer to convert B's DTOs to domain objects in microservice A?
3. Should I create domain objects in service A that would also serve as manually written output DTOs for service B? This is the opposite of point 1: there, DTOs are treated as domain objects, whereas here we treat domain objects as DTOs.
In my opinion, you are mixing two separate concepts:
The "anticorruption layer", which is a strategic DDD pattern
The layers in a layered or onion architecture, which are tactical patterns to separate concerns within an application
The goal of the anticorruption layer is to translate the ubiquitous language from another Bounded Context into your ubiquitous language so that your code doesn't get polluted with concepts not belonging to your Bounded Context.
The goal of the layers in a layered or onion architecture, and specifically the contracts between them (the DTOs you are talking about), is to prevent changes in one part of the code (for example, adding or removing a property of a domain object in the Core) from causing issues somewhere else (like accidentally modifying the public API contract).
If I understand it correctly, your two microservices belong to the same Bounded Context. If that is the case, you shouldn't need any anticorruption layer because both microservices share the same ubiquitous language.
Now, regarding the options you propose: I'm not sure I fully understand options 2 and 3, but if you are doing real microservices, you can't use option 1. Those microservices wouldn't be autonomous and independently deployable, since a change in microservice B's API would require a change and a coordinated deployment of microservice A.
So, design your microservices so that you have control over how their parts evolve (being able to make non-breaking changes to their APIs, change storage strategies without having to change the Core, change the Core without having to change the API, etc.). Of course, if a microservice is very simple, don't over-engineer it.
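If you go with a translating adapter (option 2 in the question), it can be as small as a single mapping class. A sketch with hypothetical types:

```java
// OrderDto mimics a client DTO generated from service B's OpenAPI spec;
// Order is service A's own domain object. The adapter is the only class
// that knows both shapes.
class OrderDto {
    String id;
    String totalAmountCents; // wire format: numeric string
}

class Order {
    final String id;
    final long totalCents;

    Order(String id, long totalCents) {
        this.id = id;
        this.totalCents = totalCents;
    }
}

class OrderDtoAdapter {
    // A change in B's wire format only touches this method, not A's Core.
    static Order toDomain(OrderDto dto) {
        return new Order(dto.id, Long.parseLong(dto.totalAmountCents));
    }
}
```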
The answer here is simple: there is no magic to make breaking changes non-breaking, so treat internal dependencies as you would external ones. That requires high discipline in versioning the APIs and moving versions forward. Service B needs to publish new API version specifications ahead of time, and Service A needs to be given time to adopt them. Depending on the size of your organization, it can be useful to support multiple API versions concurrently (for a while, that is). You may also want team B to monitor usage of their API versions (e.g. how many consumers are still using an old version).
I would not put a high focus on how exactly your API specification is documented or which technology is used to generate client bindings (if any). This should be up to the service B owners, and service A should be able to adopt whatever is being offered. In general, I would focus on simplicity and avoid additional layers if they don't provide a clear benefit. And team A should be free to design their internals as they see fit (even with respect to data that comes from Service B). The whole point of a microservices architecture is to be able to move forward independently.
From what I've understood of Clean Architecture, every layer can directly depend only on inner layers; toward outer layers, only abstractions are allowed as dependencies, via DIP. Following this rule, the Adapters layer is allowed to depend directly on the Application layer, and it can only have the Infrastructure layer as a dependency through abstractions. To my mind that does not make sense: in order for an adapter to perform translation between interfaces, it must know in detail which interfaces it is adapting, not know the details of one side and only abstractions of the other. I've searched for this and didn't find convincing answers.
This question is probably one of the most controversial in Clean Architecture and (as to my understanding) the one where Hexagonal Architecture and Clean Architecture differ the most.
The general concept in Clean Architecture is: the inner layer provides an interface which is implemented by the outer layer. This approach can be followed by the adapters layer as well.
Imagine you want to implement a repository pattern that accesses an SQL database. In the adapters layer you would implement an interface from the use cases layer that is most convenient for the use cases. This interface would probably have APIs rather specific to the needs of the use cases, like "GetAllCustomersWithOpenOrders" or "GetOrderHistoryOfCustomer". Now, in order to implement these APIs, the adapter needs access to the SQL database. For that it would again define an interface that is convenient for the adapter, so it would probably define generic CRUD APIs that pass some SQL as a string. This interface would then be implemented in the "frameworks and drivers" layer by a class that knows how to access the database (it would have the connection string and may depend on a vendor-specific DB access library).
With this approach the dependency rule of the Clean Architecture is maintained but it would probably raise 2 questions:
Isn't the adapter still "technology dependent" if it builds up SQL strings? Yes it is, but the adapter does not necessarily need to be technology independent, as long as it is independent of specific frameworks, vendors or external services. We could easily add another adapter which "creates a bridge" between the interface defined in the use cases layer and a document database.
What is the value of such an adapter if we still need one more interface and one more implementation in the frameworks layer? The answer probably heavily depends on the "conceptual difference" between those two interfaces. In the example above, the adapter would still contain all the knowledge about the DB schema, how to do the joins and thus how to build complex queries. The implementation in the frameworks layer would be very small, as it would probably just pass the SQL query on through the vendor-specific library to the proper DB instance. Replacing one DB vendor with another would be possible with minimal impact on your application. In other cases the ratio might be the other way round and the adapter might only be a "data conversion" layer.
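The layering described above could be sketched like this (hypothetical names; the frameworks-layer implementation of the gateway is replaced here by a stub):

```java
import java.util.List;

// Interface defined in the use cases layer, shaped for the use case.
interface CustomerQueries {
    List<String> getAllCustomersWithOpenOrders();
}

// Interface defined in the adapters layer: a generic "pass SQL as a string"
// gateway, to be implemented in the frameworks-and-drivers layer.
interface SqlGateway {
    List<String> query(String sql);
}

// The adapter knows the schema and builds the query, but stays independent
// of any concrete DB vendor or driver library.
class SqlCustomerQueries implements CustomerQueries {
    private final SqlGateway gateway;

    SqlCustomerQueries(SqlGateway gateway) {
        this.gateway = gateway;
    }

    public List<String> getAllCustomersWithOpenOrders() {
        return gateway.query(
            "SELECT c.name FROM customers c "
            + "JOIN orders o ON o.customer_id = c.id "
            + "WHERE o.status = 'OPEN'");
    }
}
```

Swapping DB vendors means swapping the SqlGateway implementation; the adapter and everything inward stays untouched.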
Personally, I still haven't found a "silver bullet" for this question, so I try to make a "pragmatic" decision case by case, as I tried to summarize in my blog post: http://www.plainionist.net/Implementing-Clean-Architecture-Frameworks/
Update 2022-11-08
Created a YouTube video which discusses this topic in further depth: Repository Pattern: CORRECT vs. pragmatic? | Clean Architecture
We are currently facing some tough architectural questions about integrating multiple Web Components, each with its own backend service, into one smooth composite web application.
There are some constraints and (negotiable) design decisions:
A MicroService should serve its own frontend (WebComponent); we would like to use HTML Imports to include such a WebComponent in the composite UI
A frontend WebComponent needs to be able to receive live updates/events from its backend MicroService
The page (the sum of Web Components used in the composite UI) shall only use one connection/permanently occupied port to communicate with the backend
I made a sketch representing our abstract / non-technical requirements for further discussion:
As I understand it, the problem could be rephrased to: how do we
a) concentrate communication entering
b) distribute communication exiting
the single transport path on both ends?
These two tasks need to be solved on both sides of the transport path, i.e. the backend and the frontend.
For the backend, I am quite hopeful that adopting the BFF pattern, as described by Sam Newman among others, could serve our needs. The right (backend) half of the above sketch could then look similar to this:
The transport path might be best served by standardized web technologies, e.g. HTTPS and WebSocket (wss) for the bidirectional communication needed most of the time. I'm keen to learn about alternatives with an equivalently high adoption rate in the web technology sector.
For the frontend we are currently lacking ideas and knowledge about previously described patterns or frameworks.
The tricky thing is that multiple basically independent WebComponents need to come together to use the ONE central communication path. If the frontend were realized as one (big) Angular application, for example, we would implement a "BackendConnectorService" (name to be discussed) and inject it into our various components.
But since we would like to use decoupled Web Components, no such background layer for shared business logic and dependency injection exists. Should we write a proprietary JS library, loaded into the window context by each of our components if not already present, and used (by convention) to communicate with the backend?
This would roughly integrate into the sketch like below:
Are we thinking/designing our application wrong?
I'm thankful for every reasonable idea or hint for a proven pattern/framework.
Also your view on the problem and the architecture might be helpful to circumnavigate the issues we are facing right now.
For the frontend, I would go with the same approach you are using on the backend.
Create an HTTP or WS gateway that the frontend components will poll with requests. It will connect to the backend BFF, and there you have solved all your problems. Any time you want to swap your components, transport or architecture, one is not dependent on the other.
I'm trying to work with go-kit (gokit.io) and to build a real-world application with it.
I've looked through the examples. These examples are great, but I do not understand how to do service-to-service communication / data transfer in the go-kit framework.
I can see the "real-world" shipping app, but I do not understand how it could be "real-world" micro-services. I can see in the sources that, for example, they build the booking service by just passing foreign repositories into the service:
type service struct {
	cargoRepository         cargo.Repository
	locationRepository      location.Repository
	routingService          routing.Service
	handlingEventRepository cargo.HandlingEventRepository
}
and later they get data from the repositories (these repositories belong to foreign micro-services) by just calling a method:
locationRepository.Find(...)
Could someone please explain me:
how do I build micro-service-to-micro-service communication in the go-kit framework? Just show me the way/pattern, please. I do not understand it at all.
I see it as if they just share direct access to the data. But in real-world micro-services, I expected that the micro-services would communicate with each other to get the data they need. And I do not understand how to do that in the go-kit framework.
I'm the author of the shipping example. Sorry for not seeing your question earlier.
This particular example needs a bit of explanation. It is an example based on tactical patterns from Domain Driven Design, which means that we need to understand what we are talking about when we're referring to a service.
There are application services that deal with the use cases offered by the application, e.g. booking.Service. There are domain services that reside in the domain layer and provide your domain with concepts that aren't necessarily bound to a domain object. In the shipping example, routing.Service is a domain service whose implementation actually queries another application; in this case it talks to this routing service.
Application and domain services are merely ways of organizing our code. Put differently, these services communicate within a process, while microservices typically communicate over a network using some form of common transport, e.g. JSON, gRPC and so on.
Coming back to your question, what I believe you are looking for is the implementation of the routing.Service which you can find here.
The proxy service used here is explained under Client-side endpoints on this page, and is used to make requests from your application to another.
If you want more detail, I wrote a blog post on the subject a while ago.
I've been using MVC for a long time and have heard about the "Service" layer (for example in Java web projects), and I've been wondering whether that is a real architectural pattern, given that I can't find much information about it.
The idea of MVCS is to have a Service layer between the controller and the model, to encapsulate all the business logic that could be in the controller. That way, the controllers are just there to forward and control the execution. And you can call a Service in many controllers (for example, a website and a webservice), without duplicating code.
The service layer can be interpreted in a lot of ways, but it's usually where your core business processing logic lives; it sits below your MVC architecture but above your data access architecture.
For example, the layers of a complete system may look like this:
View Layer: Your MVC framework & code of choice
Service Layer: Your Controller will call this layer's objects to get or update Models, or to make other requests.
Data Access Objects: These are abstractions that your service layer will call to get/update the data it needs. This layer will generally call either a database or some other system (e.g. an LDAP server, a web service, or a NoSQL-type DB)
The service layer would then be responsible for:
Retrieving and creating your 'Model' from various data sources (or data access objects).
Updating values across various repositories/resources.
Performing application-specific logic and manipulations, etc.
The Model you use in your MVC may or may not come from your services. You may want to take the results your service gives you and manipulate them into a model that's more specific to your medium (eg: a web page).
I had been thinking of this pattern myself without seeing any reference to it anywhere else, searched Google and found your question here :)
Even today there is hardly anybody talking or posting about the
View-Controller Service pattern.
Thought to let you know others are thinking the same; the image above is how I view how it should be.
Currently I am using it in a project I am working on now.
I have it in Modules, with each layer in the image above in its own self-contained Module.
The Services layer is the "connector", the "middleman", the "server-side Controller": what the client-side Controller does for the client, the "Service" does for the server.
In other words, the client-side "Controller" only "talks" with the "Service", aka the server-side Controller.
Controller ---> Requests and Receive from the <----- Service Layer
The Service layer fetches information from, or gives information to, the server-side layers that need it.
By itself the Service does not do anything but connect the server layers with what they need.
Here is a code sample:
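A minimal Java sketch of the layering, with all names illustrative: the client-side Controller talks only to the Service, and the Service connects to the server-side layers that hold the data.

```java
// Data access layer: the only place that knows where the data lives.
class UserRepository {
    String findName(int id) {
        return "user-" + id; // stand-in for a real DB lookup
    }
}

// Service layer: the server-side "middleman" / server-side Controller.
class UserService {
    private final UserRepository repository = new UserRepository();

    String userName(int id) {
        return repository.findName(id);
    }
}

// Client-side Controller: requests and receives from the Service only.
class UserController {
    private final UserService service = new UserService();

    String show(int id) {
        return service.userName(id);
    }
}
```

Because the Controller never touches the repository directly, the same Service can back both a website controller and a web service endpoint without duplicating logic.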
I have been using the MVCS pattern for years, and I didn't know anyone else did, as I couldn't find any solid info on the web. I started using it instinctively, if you like, and it has never let me down in Laravel projects. I'd say it's a very maintainable solution for mid-sized projects, especially when working in an agile environment where business logic changes constantly. Having that separation of concerns is very handy.
That said, I found the service layer to be unnecessary for small projects, prototypes and the like. I've made the mistake of over-complicating projects when making prototypes, and it ultimately just means it takes longer to get your idea out. If you're serious about maintaining the project in the mid term, then MVCS is a perfect solution IMO.