How can Clean Architecture's interface adapters adapt interfaces if they cannot know the details of the infrastructure they are adapting? - clean-architecture

From what I've understood of Clean Architecture, every layer may depend directly only on inner layers; dependencies on outer layers are allowed only through abstractions, via DIP. Following this rule, the Adapters layer may depend directly on the Application layer, but it can only reach the Infrastructure layer through abstractions. To me that makes no sense: for an adapter to translate between interfaces, it must know both interfaces it is adapting in detail, not the details of one side and only abstractions of the other. I've searched for an explanation and haven't found convincing answers.

This question is probably one of the most controversial in Clean Architecture and (to my understanding) the one where Hexagonal Architecture and Clean Architecture differ the most.
The general concept in Clean Architecture is: the inner layer provides an interface which is implemented by the outer layer. This approach can be followed by the adapters layer as well.
Imagine you want to implement a repository pattern which accesses an SQL database. In the adapters layer you would implement an interface from the use cases layer which is most convenient for the use cases. This interface would probably have APIs rather specific to the needs of the use cases, like "GetAllCustomersWithOpenOrders" or "GetOrderHistoryOfCustomer". Now, in order to implement these APIs, the adapter needs access to the SQL database. For that it would again define an interface which is convenient for the adapter, so it would probably define generic CRUD APIs that accept some SQL as a string. This interface would then be implemented in the "frameworks and drivers" layer by a class which knows how to access the database (it would have the connection string and may depend on a vendor-specific DB access library).
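Here is a minimal C# sketch of that arrangement; all type and member names are illustrative, not from the book:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Linq;

// Use cases layer: an interface shaped by what the use cases need.
public interface ICustomerRepository
{
    IReadOnlyList<Customer> GetAllCustomersWithOpenOrders();
}

// Adapters layer: implements the use-case interface and owns all schema knowledge.
public class SqlCustomerRepository : ICustomerRepository
{
    private readonly IDbGateway _db; // defined by the adapter, implemented further out

    public SqlCustomerRepository(IDbGateway db) { _db = db; }

    public IReadOnlyList<Customer> GetAllCustomersWithOpenOrders()
    {
        // The adapter knows the schema, the joins, and how to build the query...
        var rows = _db.Query(
            "SELECT DISTINCT c.Id, c.Name FROM Customers c " +
            "JOIN Orders o ON o.CustomerId = c.Id WHERE o.Status = 'Open'");

        // ...and translates the raw records into the model the use cases expect.
        return rows.Select(r => new Customer((int)r["Id"], (string)r["Name"])).ToList();
    }
}

// Still the adapters layer: a generic, CRUD-style interface convenient for the
// adapter, to be implemented in the "frameworks and drivers" layer.
public interface IDbGateway
{
    IEnumerable<IDataRecord> Query(string sql);
}

public record Customer(int Id, string Name);
```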
With this approach the dependency rule of the Clean Architecture is maintained but it would probably raise 2 questions:
Isn't the adapter still "technology dependent" if it builds up SQL strings? Yes, it is, but the adapter does not necessarily need to be technology independent, as long as it is independent of specific frameworks, vendors and external services. We could easily add another adapter which "creates a bridge" between the interface defined in the use cases layer and a document database.
What is the value of such an adapter if we still need one more interface and one more implementation in the frameworks layer? The answer probably depends heavily on the "conceptual difference" between those two interfaces. In the example above the adapter would still contain all the knowledge about the DB schema, how to do the joins and so how to build complex queries. The implementation in the frameworks layer would be very small, as it would probably just pass the SQL query on through the vendor-specific library to the proper DB instance. Replacing one DB vendor with another would be possible with minimal impact on your application. In other cases the ratio might be the other way round and the adapter might only be a "data conversion" layer.
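To make that "very small" frameworks-layer piece concrete, here is a hedged sketch of what a vendor-specific implementation of the hypothetical IDbGateway from above could look like:

```csharp
using System.Collections.Generic;
using System.Data;
using Microsoft.Data.SqlClient;

// Frameworks-and-drivers layer: a thin, vendor-specific pass-through. It knows
// the connection string and the vendor library, but nothing about the schema.
public class SqlServerGateway : IDbGateway
{
    private readonly string _connectionString;

    public SqlServerGateway(string connectionString) { _connectionString = connectionString; }

    public IEnumerable<IDataRecord> Query(string sql)
    {
        using var connection = new SqlConnection(_connectionString);
        using var command = new SqlCommand(sql, connection);
        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
            yield return reader; // each record must be consumed before the next Read()
    }
}
```

Swapping SQL Server for another vendor then means replacing only this class.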
Personally, I still haven't found a "silver bullet" for this question, so I try to make a "pragmatic" decision, case by case, as I tried to summarize in my blog post: http://www.plainionist.net/Implementing-Clean-Architecture-Frameworks/
Update 2022-11-08
Created a YouTube video which discusses this topic in further depth: Repository Pattern: CORRECT vs. pragmatic? | Clean Architecture

Related

Microservices communication within bounded context

As a part of our DDD design, we are working on a bounded context and have identified two microservices A and B.
Service A needs to make calls to Service B via REST API. Service B already provides an OpenAPI spec describing how to get any data. We use the OpenAPI generator to auto-generate client-side DTOs.
There is an exact one-to-one mapping between B's DTOs and A's domain objects.
As far as I understand, we should use an anti-corruption layer (using hexagonal architecture) if our service is communicating with an untrusted third-party service. But what should we do if the communication is between internal services, as in the case above?
1. Should microservice B's API DTOs be used directly as domain objects in microservice A? This would mean that we do not create separate domain classes/objects in service A; we treat B's DTOs as domain objects in service A.
2. Should we create an adapter layer to convert B's DTOs to domain objects in microservice A?
3. Should I create domain objects in service A which would also serve as the manually written output DTOs for service B? This is the opposite of point 1: there, DTOs are treated as domain objects, whereas here we are treating domain objects as DTOs.
In my opinion, you are mixing two separate concepts:
The "anticorruption layer", which is a strategic DDD pattern
The layers in a layered or onion architecture, which are tactical patterns to separate concerns within an application
The goal of the anticorruption layer is to translate the ubiquitous language from another Bounded Context into your ubiquitous language so that your code doesn't get polluted with concepts not belonging to your Bounded Context.
The goal of the layers in a layered or onion architecture and specifically the contracts between them (the DTOs you are talking about) is to avoid changes in one part of the code (for example, adding or removing a property in a domain object in the Core) to cause issues somewhere else (like accidentally modifying the public API contract).
If I understand it correctly, your two microservices belong to the same Bounded Context. If that is the case, you shouldn't need any anticorruption layer because both microservices share the same ubiquitous language.
Now, regarding the options that you propose, I'm not sure I fully understand options 2 and 3, but what I'll say is that if you are doing real microservices, you can't use option 1: the microservices wouldn't be autonomous and independently deployable, since a change in microservice B's API would require a change and a coordinated deployment of microservice A.
So, design your microservices so that you have control over how their parts evolve (be able to make non-breaking changes in their APIs, change storage strategies without having to change the Core, change the core without having to change the API, etc). Of course, if a microservice is very simple, don't over-engineer it.
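As an illustration of option 2, a thin mapping adapter in service A might look like the following sketch (all names are hypothetical, not from the question):

```csharp
using System;

// Hypothetical DTO as generated from service B's OpenAPI spec.
public record OrderDto(string Id, string State, decimal Total);

// Service A's own domain object; it can evolve independently of B's contract.
public enum OrderStatus { Open, Closed }
public record Order(Guid Id, OrderStatus Status, decimal Total);

// The adapter is the only place that knows both shapes, so a change in B's
// DTO is absorbed here instead of rippling through A's core.
public static class OrderMapper
{
    public static Order ToDomain(OrderDto dto) => new Order(
        Guid.Parse(dto.Id),
        dto.State == "OPEN" ? OrderStatus.Open : OrderStatus.Closed,
        dto.Total);
}
```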
The answer here is simple: there is no magic to make breaking changes non-breaking. So treat internal dependencies like you would treat external ones. That requires strong discipline around versioning the APIs and moving versions ahead. Service B needs to publish new API version specifications ahead of time, and Service A needs to be given time to adopt them. Depending on the size of your organization it can be useful to support multiple API versions concurrently (for a while, that is). You may also want Team B to monitor usage of their API versions (e.g. how many consumers are still using an old version of the API).
I would not put a high focus on how exactly your API specification is documented or which technology is used to generate client bindings (if at all). This should be up to the service B owners, and service A should be able to adopt whatever is being offered. In general I would put the focus on simplicity and avoid additional layers if they don't provide a clear benefit. And team A should be free to design their internals as they see best fit (even with respect to data that comes from Service B). The whole point of a microservices architecture is to be able to move forward independently.
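Supporting two versions side by side can be as simple as routing them separately. A hedged ASP.NET Core sketch, with all names invented for illustration:

```csharp
using Microsoft.AspNetCore.Mvc;

// v1 stays available while consumers migrate to v2.
[ApiController]
[Route("api/v1/orders")]
public class OrdersV1Controller : ControllerBase
{
    [HttpGet("{id}")]
    public ActionResult<OrderV1Dto> Get(string id) => new OrderV1Dto(id, "OPEN");
}

[ApiController]
[Route("api/v2/orders")]
public class OrdersV2Controller : ControllerBase
{
    // v2 renames "state" to "status": a breaking change, hence a new version.
    [HttpGet("{id}")]
    public ActionResult<OrderV2Dto> Get(string id) => new OrderV2Dto(id, "Open");
}

public record OrderV1Dto(string Id, string State);
public record OrderV2Dto(string Id, string Status);
```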

Dependency between adapters in hexagonal architecture Spring Boot

I've been trying to refactor a brand-new project to follow hexagonal architecture and DDD patterns.
This is the structure of my domain: I have files and customer data. Entity-wise, it makes sense for these to be separated. The "facade" objects connect the ports with the domain. Quick example:
Controller (application layer) --uses--> Facade --uses--> Ports <--implement-- Adapters (infrastructure layer)
The problem I have is that I have a third adapter (not in the picture) that is an external OCR app. It's an external client (we use a Feign client to call their API) and it provides customer data (first adapter), but it also serves us the raw data of images (second adapter).
My first two adapters have entities, repositories and databases on our local systems, but this third one, given the theory behind hexagonal architecture, makes sense to me as its own separate adapter.
But then how do I use it from my other two adapters? Should the three of them be in the same adapter since they depend on each other? CustomerData and File have a one-to-many relationship as well, so maybe it makes sense?
I have only implemented the File part so far and have yet to refactor the CustomerData part since I'm trying to wrap my head around the concepts first.
I've seen a lot of articles but most of them are really simple with no real world examples and they have clearly separated domains.
Thanks a lot for the clarification in advance.
For lack of a better idea, since the interface ports are beans implemented by the facades, I'm wiring the ports I need into the other domain's facades and using them the same way as if it were a controller of that same domain. The diagram would be something similar to:
Facade (domain1) --uses--> Port (of domain2) <--implement-- Adapters (infrastructure layer)
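A sketch of that wiring (in C# for consistency with the other sketches here, but the constructor-injection idea carries over directly to Spring beans; all names are made up):

```csharp
// Port owned by the customer-data domain (domain2).
public interface ICustomerDataPort
{
    CustomerData GetCustomerData(string customerId);
}

// Facade of the file domain (domain1), consuming domain2 through its port,
// exactly as a controller of domain2 would; it never sees the adapter.
public class FileFacade
{
    private readonly ICustomerDataPort _customerData;

    public FileFacade(ICustomerDataPort customerData) { _customerData = customerData; }

    public void AttachFile(string customerId, byte[] content)
    {
        var customer = _customerData.GetCustomerData(customerId);
        // ...file-domain logic that uses the other domain's data...
    }
}

public record CustomerData(string Id, string Name);
```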
Edit:
I've found a very extensive article that is very useful for understanding hexagonal architecture, and it goes even deeper.
Long story short, I'll copy the relevant part:
Triggering logic in other components
When one of our components (component B) needs to do something whenever something else happens in another component (component A), we cannot simply make a direct call from component A to a class/method in component B, because A would then be coupled to B.
However, we can make A use an event dispatcher to dispatch an application event that will be delivered to any component listening for it, including B, and the event listener in B will trigger the desired action. This means that component A will depend on an event dispatcher, but it will be decoupled from B.
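A minimal sketch of such a dispatcher (the API here is invented for illustration, not taken from the article):

```csharp
using System;
using System.Collections.Generic;

public interface IEventDispatcher
{
    void Subscribe<TEvent>(Action<TEvent> listener);
    void Dispatch<TEvent>(TEvent evt);
}

public class SimpleEventDispatcher : IEventDispatcher
{
    private readonly Dictionary<Type, List<Action<object>>> _listeners = new();

    public void Subscribe<TEvent>(Action<TEvent> listener)
    {
        if (!_listeners.TryGetValue(typeof(TEvent), out var list))
            _listeners[typeof(TEvent)] = list = new List<Action<object>>();
        list.Add(e => listener((TEvent)e));
    }

    public void Dispatch<TEvent>(TEvent evt)
    {
        if (_listeners.TryGetValue(typeof(TEvent), out var list))
            foreach (var listener in list)
                listener(evt!);
    }
}

// Component A only knows the dispatcher and the event, never component B:
public record FileUploaded(string FileId);
// dispatcher.Dispatch(new FileUploaded("42"));
// Component B subscribes during wiring:
// dispatcher.Subscribe<FileUploaded>(e => ocrFacade.Process(e.FileId));
```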
Hexagonal Architecture doesn't forbid the relationships between adapters.
Anyway, usually we will have a port for each external actor interacting with our business logic, and an adapter to translate to/from the actor.
You can take a look at this:
https://jmgarridopaz.github.io/content/hexagonalarchitecture-ig/chapter1.html

Repository pattern and web api

Hi, I learned Web API 2 recently and I am working on a sample project now. I am following a layered architecture in my project. This is the flow:
controller => Business Layer => Data Layer
Now I have read some articles about the repository pattern, which sounds better nowadays.
I saw a flow like:
controller => services => repository
Is there any significant difference between the two flows?
As a beginner, which style of architecture should I follow?
Can someone help me understand these patterns?
When you use services with repositories, you should make the repository generic and use a Unit of Work (UoW) so you don't repeat yourself with CRUD actions on every entity. Then add an object-relational mapper (ORM) like Entity Framework. It is preferred that you also use domain objects for storing your logic, and make the services return data transfer objects (DTOs), because best practice is to keep your services small and clean. Then you also need to include some mapping tool/logic to map DTOs to domain objects.
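A hedged sketch of that stack, with EF Core standing in for the ORM and all names illustrative:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Generic repository: CRUD written once instead of per entity.
public interface IRepository<T> where T : class
{
    IQueryable<T> GetAll();
    T? GetById(int id);
    void Add(T entity);
    void Remove(T entity);
}

public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;
    public EfRepository(DbContext context) { _context = context; }

    public IQueryable<T> GetAll() => _context.Set<T>();
    public T? GetById(int id) => _context.Set<T>().Find(id);
    public void Add(T entity) => _context.Set<T>().Add(entity);
    public void Remove(T entity) => _context.Set<T>().Remove(entity);
}

// Unit of Work: one commit boundary shared by all repositories in a request.
public interface IUnitOfWork
{
    IRepository<T> Repository<T>() where T : class;
    void Commit(); // typically delegates to DbContext.SaveChanges()
}
```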
And you see where this is all going: suddenly you have far more complexity and abstraction than you need, because the only thing the customer cares about is the quality of your product, not some fancy architecture.
But as a project grows, proper architecture, automated tests, etc. become much more important, so the approach is valid in large projects. If you want to experiment and learn, it is a great opportunity, but if you have a running application in production, I would rather not make such big changes.
TL;DR: they are both valid.

Prism modularity practices

I'm studying Prism and need to create a small demo app. I have some design questions. The differences between attitudes might be small, but I need to apply the practices to a large scale project later, so I'm trying to think ahead.
Assuming the classical DB-related scenario - I need to get a list of employees, and a double click on a list item fetches extra information for that employee: should the data access project be a module, or is a project accessed via the repository pattern a better solution? What about a large-scale project, where the DB is more than one table and provides, say, information about employees, sales, companies, etc.?
I'm currently considering using the DataAccess module as a stand-alone module, and I have defined its interface in the Infrastructure project, as well as its return type (EmployeeInformation). This means that both my DataAccess module and my application have to reference the Infrastructure project. Is this a good way to go?
I'm accessing said DataAccess module using ServiceLocator (MEF) from my application. Should the ServiceLocator be accessed by parts of the application, or is it meant to be used in the initialization section only?
Thanks.
A module is needed and makes sense when it contains one part of the application that can live on its own. This can be a part of the application that only certain people need or are allowed to use, e.g. the user management module only administrators are allowed to access. But your data access layer is not that kind of isolated functionality that usually goes into a module. It is better placed in a common assembly that the real modules can use. The problem here is that all modules then depend on this DAL assembly, so keep the task of updating your DAL in mind when designing your application (backward compatibility).
Usually there is no problem having broadly used types reside in a common assembly. But this is not the Infrastructure assembly. Infrastructure, as the word implies, provides services to make the modules work together. Your common types should go into something like YourNamespace.Types or YourNamespace.Client.Base or ...
This is the subject of many arguments and still unclear (at least from my point of view). Purists of dependency injection say the ServiceLocator should only be used during initialization. Pragmatists use the ServiceLocator all over their application.
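For illustration, the "purist" style resolves the service once during module initialization and then flows it through constructors. A hedged sketch using the Common Service Locator that Prism/MEF setups of that era typically exposed (all names are illustrative):

```csharp
using Microsoft.Practices.ServiceLocation;

// Resolved once, at module initialization...
public class EmployeeModule // would implement Prism's IModule
{
    public void Initialize()
    {
        var dataAccess = ServiceLocator.Current.GetInstance<IDataAccess>();
        var viewModel = new EmployeeListViewModel(dataAccess);
        // ...register views/regions here...
    }
}

// ...then passed along explicitly, so no other class touches the locator.
public class EmployeeListViewModel
{
    private readonly IDataAccess _dataAccess;
    public EmployeeListViewModel(IDataAccess dataAccess) { _dataAccess = dataAccess; }
}

// Defined in the common assembly shared by the modules.
public interface IDataAccess
{
    EmployeeInformation GetEmployeeInformation(int employeeId);
}

public class EmployeeInformation { /* ... */ }
```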

Linq to SQL layers/architecture?

I am sorry, my question may look like an old, repetitive question, but as I am starting with LINQ to SQL I want to discuss how many layers (what architecture) I should use.
I work mostly on web sites and small to medium scale web applications. I understand that dividing an application into layers helps its maintenance and enhancement, but frankly I want a balanced approach that gives me rapid development and code reusability as well. I cannot spare much time on unwanted management of layers.
Before, I was using 4 layers (business objects, BLL, DAL and user interface). I became confused about it as different people have described different layers. Please guide me on what layers, and how many, I should use. Thanks
Don't use the layer architecture. Use the onion architecture.
The most important aspect from an architecture perspective is the separation of concerns. Separation of concerns leads to clean code, easy maintenance, extensibility, etc. That said, I recommend basing your decisions on the following criteria:
Try to architect your system in a loosely coupled way. Use messaging instead of RPC, as messaging is reliable, scalable, asynchronous, loosely coupled, etc. You can use either MSMQ or NServiceBus (for a service-bus-based architecture).
Create layers based on the separation-of-concerns concept. For example, you can go for a typical 3-layered architecture with just a UI layer, a business logic layer (business rules + workflow) and a data access layer, or something more granular such as a UI layer, a services layer (facade), a business logic layer, a data access layer, etc. Using IoC / dependency injection will make life easy, as none of the layers will have a direct dependency on another. Moreover, it makes unit testing easy, as you inject mocks instead of the real implementations for the unit test (see the sketch after this list). There are many IoC frameworks available (Ninject, Autofac, Castle Windsor, StructureMap, etc.).
Try to use EF instead of LINQ to SQL, as the latter works only with SQL Server, while EF works with other databases too. Moreover, in my opinion, EF is where Microsoft is innovating, and I would assume that LINQ to SQL may be retired one day.
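As referenced in the list above, here is a minimal sketch of layers depending on interfaces rather than on each other's implementations (names are invented):

```csharp
// Data access contract, implemented by the DAL.
public interface IOrderRepository
{
    Order? Find(int id);
}

// Business logic layer: depends only on the contract, never the implementation.
public class OrderService
{
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders) { _orders = orders; }

    public Order? GetOrder(int id) => _orders.Find(id);
}

public record Order(int Id);

// In a unit test, inject a fake instead of the real repository:
// var service = new OrderService(new FakeOrderRepository());
```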
Great question, haansi. This is something that I have wrestled with quite often when building small to medium sized sites. Finding the balance of creating an architecture that gives you the greatest flexibility and allows for rapid deployment is what I think we all need to strive for in all of our work.
With that being said, I have found the Repository Pattern to be quite helpful in LINQ to SQL projects. I couple it with the Model-View-Presenter pattern (for WebForms or other projects) and it provides a great foundation for reuse with minimal layers.
My WebForm calls a Presenter class, which in turn is responsible for populating the View. To populate that View, the Presenter can call any number of Repositories. The Repository is where you encapsulate your DataContext class and your LINQ to SQL calls. These calls return the model classes.
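A hedged sketch of that flow; CompanyDataContext and Employee stand in for the designer-generated LINQ to SQL types, and all other names are invented:

```csharp
using System.Collections.Generic;
using System.Linq;

// Repository: the only place that knows about the LINQ to SQL DataContext.
public class EmployeeRepository
{
    public IList<Employee> GetActiveEmployees()
    {
        using (var db = new CompanyDataContext()) // generated LINQ to SQL context
        {
            return db.Employees.Where(e => e.IsActive).ToList();
        }
    }
}

// View contract: lets the Presenter be tested without a WebForm.
public interface IEmployeeListView
{
    IList<Employee> Employees { set; }
}

// Presenter: populates the View from one or more Repositories.
public class EmployeeListPresenter
{
    private readonly IEmployeeListView _view;
    private readonly EmployeeRepository _repository = new EmployeeRepository();

    public EmployeeListPresenter(IEmployeeListView view) { _view = view; }

    public void LoadEmployees() => _view.Employees = _repository.GetActiveEmployees();
}
```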
One huge benefit of this, regardless of the size of the app, is that you get great reuse out of your Repository, you maximize the use of LINQ, and you have used patterns that other software developers can easily read and support.
Another big benefit is that you now have created a simple architecture that can benefit from using Unit Testing to test from the Presenter back to the Repository without a ton of effort.
Good luck!
I decided to use one layer (DAL + BLL) for small projects, and for large applications I will use different layers for the DAL and BLL. I will use LINQ in the DAL, and functions will return IQueryable.
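For what that could look like, here is a small hedged sketch; the generated context and entity names are hypothetical:

```csharp
using System.Linq;

// Single DAL+BLL class: methods return IQueryable so callers can compose
// further filtering, ordering and paging before the query hits the database.
public class EmployeeData
{
    // Generated LINQ to SQL context; it must stay alive until the query is enumerated.
    private readonly CompanyDataContext _db = new CompanyDataContext();

    public IQueryable<Employee> ActiveEmployees() =>
        _db.Employees.Where(e => e.IsActive);
}

// Usage in a page; the composed query executes once, on ToList():
// var firstPage = new EmployeeData().ActiveEmployees()
//                                   .OrderBy(e => e.Name)
//                                   .Take(20)
//                                   .ToList();
```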
