What is the use of service layer in Spring Boot applications? - spring

I am new to Spring Boot and I am creating a RESTful API with no UI.
I am thinking if I should use business service and call repository from there or just call the repository directly from my REST controller?

Separation of concerns is the key:
The controller (presentation layer, or port) is a protocol interface which exposes application functionality as RESTful web services. It should do that and nothing more.
The repository (persistence layer, or adapter) abstracts persistence operations: find (by id or other criteria), save (create, update) and delete records. It should do that and nothing more.
The service layer (domain) contains your business logic. It defines which functionalities you provide, how they are accessed, and what to pass and get in return - independent of any port (of which there may be multiple: web services, message queues, scheduled events) and independent of its internal workings (it's nobody's business that the service uses the repository, or even how data is represented in a repository). The service layer may translate 1:1 from the repository data, or may apply filtering, transformation or aggregation of additional data.
The business logic may start simple in the beginning, and offer no more than simple CRUD operations, but that doesn't mean it will stay that way forever. As soon as you need to deal with access rights, it's no longer a matter of routing requests from the controller directly to the repository, but of checking access and filtering data as well. Requests may need validation and consistency checks before hitting the database, rules and additional operations may be applied, so your services gain more value over time.
Even for simple CRUD cases, I'd introduce a service layer, which at least translates from DTOs to Entities and vice versa.
Keep your controllers/repositories (or ports and adapters) stupid, and your services smart, and you get a maintainable and well-testable solution.
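A minimal sketch of such a translation-only service (all class and field names here are hypothetical, not from any particular codebase): the service hands DTOs to the controller and keeps the entity, including internal fields, behind the boundary.

```java
// Hypothetical sketch: a thin service that shields the controller from the
// persistence model by mapping between entities and DTOs.
import java.util.Optional;

class UserEntity {               // persistence-layer representation
    Long id;
    String name;
    String passwordHash;         // internal detail that must not leak out
}

class UserDto {                  // what the REST layer exposes
    Long id;
    String name;
}

interface UserRepository {       // stands in for a Spring Data repository
    Optional<UserEntity> findById(Long id);
}

class UserService {
    private final UserRepository repository;

    UserService(UserRepository repository) {
        this.repository = repository;
    }

    // The controller only ever sees DTOs; entities stay behind this boundary.
    Optional<UserDto> getUser(Long id) {
        return repository.findById(id).map(this::toDto);
    }

    private UserDto toDto(UserEntity entity) {
        UserDto dto = new UserDto();
        dto.id = entity.id;
        dto.name = entity.name;
        // passwordHash is deliberately not copied into the DTO
        return dto;
    }
}
```

Even when the mapping is 1:1 today, this seam is where filtering, aggregation or access checks can later be added without touching the controller.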

The service layer is not a concept exclusive to Spring Boot. It's a software architectural term and frequently referred to as a pattern. Simple applications may skip the service layer. In practical terms, nothing stops you from invoking a repository method from the controller layer.
But, I strongly advise the usage of a service layer, as it is primarily meant to define the application boundaries. The service layer responsibilities include (but are not limited to):
Encapsulating the business logic implementation;
Centralizing data access;
Defining where the transactions begin/end.
Quoting the Service Layer pattern from Martin Fowler's Catalog of Patterns of Enterprise Application Architecture:
A Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
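The "transactions begin/end at the service boundary" responsibility can be sketched without Spring; the hand-rolled commit/rollback below is roughly what a @Transactional service method gives you for free in Spring Boot (all names are hypothetical):

```java
// Plain-Java sketch of transaction demarcation at the service boundary.
// InMemoryTx stands in for a real database transaction.
import java.util.ArrayList;
import java.util.List;

class InMemoryTx {
    final List<String> committed = new ArrayList<>();
    private final List<String> pending = new ArrayList<>();

    void write(String record) { pending.add(record); }
    void commit()             { committed.addAll(pending); pending.clear(); }
    void rollback()           { pending.clear(); }
}

class TransferService {
    private final InMemoryTx tx;

    TransferService(InMemoryTx tx) { this.tx = tx; }

    // The service method is the transaction boundary: both writes become
    // visible together, or neither does. With Spring, annotating this
    // method @Transactional replaces the explicit commit/rollback.
    void transfer(String from, String to, int amount) {
        try {
            tx.write("debit " + from + " " + amount);
            if (amount < 0) throw new IllegalArgumentException("negative amount");
            tx.write("credit " + to + " " + amount);
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();   // no partial write survives a failure
            throw e;
        }
    }
}
```

Placing this boundary in the controller instead would tie transaction scope to the transport layer, which is exactly what the separation avoids.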

Related

Clean Architecture: Must gateway to gateway communication go via a use case?

In my application the network implementation (particularly a HTTP request interceptor) requires an authorization token stored in local persistent storage.
Now, following clean architecture, both network implementation and persistent storage are in the outer most, the framework layer.
The access token can be retrieved via a repository (where the implementation is in the gateway/interface adapters layer and the repository interface is in the inner-most/entity layer).
How should the control flow be in my scenario? Should it go
1. via a use case, since, in the ring diagram, the use case layer entirely encapsulates the entity layer, or
2. via the repository interface, skipping the use case layer?
As counter-arguments to option 1, I see that
I require a "get access token" use case, which is not a real use case from the user perspective.
If the use case layer always encapsulates the entity layer, is it ok that the repository interface is in the entity layer but the implementation only in the interface adapters layer (skipping the use case layer)?
As a counter-argument to option 2, I see that the use case layer sits only between entities and user-facing interfaces (controllers, presenters), but not between entities and data gateways - meaning the use case layer does not actually fully encapsulate the entity layer, which would mean the ring representation of the layers is not accurate.
As I understand Clean Architecture, and as I have implemented it in my applications, the repository interfaces are always in the use case layer, not in the entities layer.
Furthermore, I would keep access token handling in the adapters layer if it is not part of your application's core business logic.

Spring Boot distributed transactions

We need to find the best way to address distributed transaction management in our microservices architecture.
Here is the Problem Statement.
We have one composite microservice which interacts with two underlying atomic microservices (each meant for a specific purpose, obviously), each having its own database. We can consider these two microservices as
STUDENT_SERVICE (STU_DB)
TEACHER_SERVICE (TEACHR_DB)
The composite service use case is that a user (administrator) can assign a teacher to a student for a specific course, etc.
I wonder how we can address this problem in one transaction, as each service (STUDENT_SERVICE and TEACHER_SERVICE) has a separate DB and everything should happen in one transaction: either commit or rollback.
Since those two services are separate, I see that JTA would not be of help, as it is meant for applications (services) deployed on the same application server.
I have ruled out JTA for the reason mentioned above.
// Pseudo-code
class CompositeService {
    assignStaff(request) {
        // txn start
        updateStudentServiceAPI(request);
        updateTeacherServiceAPI(request);
        // txn end
    }
}
The system should be in a consistent state after API execution.
This is a tricky question, even if it's not obvious at first sight.
The functionality you call for is generally considered an anti-pattern in microservice architecture.
A microservice architecture is in general a distributed system. Transactions in distributed systems are hard (see https://martin.kleppmann.com/2015/09/26/transactions-at-strange-loop.html). Your application consists of two services.
JTA is a Java API for ACID-style transactions. ACID transactions usually require locks to be established in databases. As the transaction spans multiple services (in your case there are two), a failure of one service can block processing in the other service. In that case you lose the advantages of the microservice architecture - loose coupling and independence of the services. You can end up building a distributed monolith (see this nice article: https://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/).
Btw. there are several discussions on the topic of transactions in microservices here on Stack Overflow. Just search, or check e.g.
Distributed transactions in microservices
Transactions in microservices
Transactions across REST microservices?
What are your options?
(Disclaimer: I'm a developer of http://narayana.io and the presented options are from the perspective of Java EE and Narayana. There could be other projects providing similar functionality. Plus, even though Narayana integrates nicely with Spring, you will possibly need to handle some integration issues.)
You really need ACID-style transactions in your project - i.e. you insist on the transactional behaviour as you describe it. Then you need to span the transaction over the services. If the services communicate over REST, you can consider for example Narayana REST-AT (http://jbossts.blogspot.com/2011/03/rest-cloud-and-transactions.html; start by looking into the quickstart here: https://github.com/jbosstm/quickstart/tree/master/rts).
You relax your requirement for atomicity, and then you can consider some transaction model that relaxes consistency (you are fine with being eventually consistent). You can consider for example LRA (https://github.com/eclipse/microprofile-lra/blob/master/spec/src/main/asciidoc/microprofile-lra-spec.adoc). (Unfortunately the spec and implementation are still not ready, but a PoC could be run against the current state.)
You want to use a completely different approach to transaction processing. Then you can investigate event sourcing. You would deploy e.g. Apache Kafka and send events for updates to the event store. Each service reads those events and updates its DB independently.
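A sketch of the relaxed-atomicity option applied to the question's pseudo-code, assuming each atomic service exposes an undo operation (all client and method names are hypothetical): if the second call fails, the first is reversed by a compensating action, so the system converges to a consistent state; this is eventual consistency, not ACID atomicity.

```java
// Hypothetical compensation-based (saga-style) sketch: no distributed lock,
// just a compensating call when the second service fails.
interface StudentClient {
    void assign(String courseId);
    void unassign(String courseId);   // compensating action
}

interface TeacherClient {
    void assign(String courseId);
}

class CompositeService {
    private final StudentClient students;
    private final TeacherClient teachers;

    CompositeService(StudentClient students, TeacherClient teachers) {
        this.students = students;
        this.teachers = teachers;
    }

    void assignStaff(String courseId) {
        students.assign(courseId);
        try {
            teachers.assign(courseId);
        } catch (RuntimeException e) {
            students.unassign(courseId);  // undo the first step on failure
            throw e;
        }
    }
}
```

A production saga also needs retries and durable state for the compensation itself (the compensating call can fail too), which is what frameworks like LRA take care of.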

Why do we need to write business logic in a separate service layer instead of writing it in the controller itself?

What's the use of creating a different layer, i.e. a service layer, for the business logic implementation instead of implementing that business logic in the controller itself?
It is because of Separation of concerns.
The controller is primarily interested in handling the incoming HTTP request and responding to that request. It is concerned with the things related to a given communication channel.
You can expose a REST API as well as a SOAP API, or you may have various formats in which you want to share the data. Business logic as such does not care about how you communicate this data to end users. So you take it out and keep it in one common place that deals only with business logic, while the controller class just calls it. You can then have a REST controller and a SOAP controller answering requests via the same piece of business-logic code.
What you do in the controller is validate the request, call the service, and handle exceptions in the way you want them exposed to the caller.
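That division of labour can be sketched in plain Java (in a real application this would be a Spring @RestController; all names here are hypothetical): the controller validates input, delegates to the service, and translates exceptions into transport-level responses.

```java
// Hypothetical sketch of a "thin" controller: validation, delegation,
// exception translation - and no business logic.
class NotFoundException extends RuntimeException {}

interface AccountService {
    String getAccountName(long id);   // business logic lives behind this
}

class AccountController {
    private final AccountService service;

    AccountController(AccountService service) { this.service = service; }

    // Returns an HTTP-style "status body" string for illustration.
    String handleGet(String rawId) {
        long id;
        try {
            id = Long.parseLong(rawId);        // request validation
        } catch (NumberFormatException e) {
            return "400 Bad Request";
        }
        try {
            return "200 " + service.getAccountName(id);
        } catch (NotFoundException e) {        // exception translation
            return "404 Not Found";
        }
    }
}
```

A SOAP or message-queue endpoint could delegate to the same AccountService, which is the point of keeping the logic out of the controller.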
It depends on your architecture. If you're using some of the Domain-Driven Design principles, there would be little if any logic in the controllers/API. The controllers would be used to coordinate/manage the communication between the domain services (i.e. AccountService), repositories (i.e. AccountRepo), and/or infrastructure services (i.e. EmailService). All logic would be in the models and services. Some advantages are...
1. Unit testable code
2. The code better models the business problem (controllers mean nothing to the business problem)
3. Controllers don't become a place to jam a lot of business logic into, resulting in a tangled mess
4. And more...
Of course, this all depends on whether maintainability is a priority.

Is the Controller in MVC considered an application service for DDD?

I am applying DDD for the M part of my MVC and after some research (studying up!), I have come to the realization that I need my controller to be interacting with domain services (in the model). This would make my controller the consumer of the domain services and therefore an application service (in DDD terms). Is this accurate? Is there a difference between a controller and what DD defines as an application service?
The application layer sits between the domain layer and the presentation layer. The controller is part of the presentation layer; it sends commands or queries to the application layer, where the application services execute them using the services and objects of the domain model. So controllers are different from the application services, and they are probably bound to the actual communication form, e.g. HTTP. You should not call domain services from controllers directly; this might be a sign of misplaced code.
Domain Driven Design: Domain Service, Application Service
Domain Services: Encapsulate business logic that doesn't naturally fit within a domain object, and are NOT typical CRUD operations - those would belong to a Repository.
Application Services: Used by external consumers to talk to your system (think Web Services). If consumers need access to CRUD operations, they would be exposed here.
So your service is probably an application service and not a domain service, or part application service and part domain service. You should check and refactor your code. I guess after 4 years this does not matter, but I had the same thoughts about an application I am currently developing. This app might be too small to use DDD on, so confusing controllers with app services is a sign of over-engineering here.
It is an interesting question when to start adding more layers. I think every app should start with some kind of domain model and adapters to connect to that domain model. So if the app is simple enough, adding more than 2 layers might not be necessary. But that's just a thought; I am not that experienced with DDD.
The controller is not considered a service in DDD. Controllers operate in the UI tier. The application services get data from the DB, validate data, pass data to the client (MVC could be a client, but so could a request coming from a WinForms app), etc.
All the controller is doing is servicing requests from the UI. It's not part of the application domain.
In the DDD Reference, controllers are not mentioned at all. So I think from a DDD POV the answer is undefined. So I would consider the more practical question: "Do we need to separate Controllers and Application Services?"
Pros:
Separating input processing from use case implementation.
Cons:
Extra layer of indirection that complicates understanding of simpler cases. (It is also annoying to implement some trivial thing by pulling data through many layers without anything actually being done.)
So I would choose separating the controller and the application service if input processing obfuscates the use case logic. I would choose to keep use case logic in the controller if both are simple enough that you can easily see the use cases in code separately from input processing.
A Layered Architecture splits the application up into a UI layer, application layer, domain layer and infrastructure layer (Vaughn Vernon's Implementing Domain-Driven Design, location 2901). The controller falls in the "Application Layer" of this broader architecture and will therefore interact directly with the domain services in the model, and is considered an application service. Not only that, it will obviously also use the entities and aggregates as available.

Replacing Tuxedo calls with JDBC

I have been tasked with replacing some Tuxedo services with the equivalent JDBC calls.
Considering a single Tuxedo service, I have started by creating a JDBC DAO, which implements the same interface as the existing Tuxedo DAO. I am calling methods on this from a new service layer. I am planning to use the Spring @Transactional annotation on my service layer to handle JDBC transactions.
Tuxedo handles transactions internally, hence a single Tuxedo DAO method call is comparable to multiple method calls on a JDBC DAO, which would be called from the new Service layer.
Given the above it makes sense to me that the Tuxedo DAO should really be a service level entity. Does that make sense?
Any thoughts on the best way to lay this out from a Service/DAO layer perspective would be appreciated. I need to keep the Tuxedo DAO for legacy purposes, but refactoring this into the Service layer should not be an issue if required.
Thanks
Jay
Well,
It makes a lot of sense. In fact, a Tuxedo service (depending on whether it's only DB access or has some more business logic) could be replaced by a simple DB DAO or by some sort of service (EJB, web service, etc., depending on the standard technologies used in the enterprise).
I would start by classifying the services so you can decide what to do with each one of them and maybe settle on a few strategies. Something like "DB-DAO", "OTHER-DATASTORE-DAO", "MORE COMPLEX SERVICE".
After you have done this work, you can build your direct DAOs and services. If you decide to deploy the services on a different infrastructure (for scaling issues, or just because many applications will use them and you want to keep clean visibility), you can still write DAOs to consume them and respect the original calling interface, but with a new implementation behind it.
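The refactoring described in the question could look roughly like this: one former Tuxedo service call becomes one service-layer method grouping several JDBC DAO calls; in Spring, that method would carry @Transactional so the DAO calls commit or roll back together (all names here are hypothetical):

```java
// Hypothetical sketch: a service-layer method as the unit of work that
// replaces a single self-transactional Tuxedo service invocation.
interface OrderDao {                     // the new JDBC DAO
    void insertHeader(String orderId);
    void insertLine(String orderId, String item);
}

class OrderService {
    private final OrderDao dao;

    OrderService(OrderDao dao) { this.dao = dao; }

    // Was: one Tuxedo call handling its own transaction internally.
    // Now: one service method grouping multiple DAO calls; in Spring,
    // @Transactional here would make them share a single transaction.
    void createOrder(String orderId, String item) {
        dao.insertHeader(orderId);
        dao.insertLine(orderId, item);
    }
}
```

Keeping the legacy Tuxedo DAO behind the same interface lets you swap implementations per service while callers keep using the service-layer method.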
Regards
