Using a Transaction Between Two Different Ports in Hexagonal Architecture - spring-boot

We're using a hexagonal architecture in one of our microservices. Spring Boot is the framework used to implement the service.
For one use case, we need to update a relational database table and send a message to a Kafka topic. Quite usual. We don't want to use any CDC or outbox pattern; instead we want to rely on Spring's transaction manager.
However, we need some clarification on how to implement this scenario. We don't want any dependency on the framework in the application layer (service/use case). How can we implement transactionality? Should we use the @Transactional annotation on the controller? Should we have a single output port, for example saveAndPublish (I don't like this solution), and put the transaction management in the implementing adapter?
I know that every use case should be transactional by design.
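One way to keep the use case free of Spring while still making it transactional is to define a small transaction port in the application layer and implement it with Spring's TransactionTemplate in an outbound adapter. A minimal sketch, with invented port and domain names (note that the Kafka send only rolls back together with the DB write if the producer is configured to participate in, or synchronize with, that transaction):

// Application layer: plain Java, no Spring imports.
import java.util.function.Supplier;

interface TransactionalExecutor {
    <T> T execute(Supplier<T> work);
}

class AssignCourseUseCase {

    private final TransactionalExecutor tx;
    private final SaveCoursePort savePort;      // output port to the relational table
    private final PublishEventPort publishPort; // output port to the Kafka topic

    AssignCourseUseCase(TransactionalExecutor tx,
                        SaveCoursePort savePort,
                        PublishEventPort publishPort) {
        this.tx = tx;
        this.savePort = savePort;
        this.publishPort = publishPort;
    }

    void assign(Course course) {
        // Both output ports are called inside one transaction,
        // but the use case never references Spring.
        tx.execute(() -> {
            savePort.save(course);
            publishPort.publish(course);
            return null;
        });
    }
}

// Infrastructure layer: the Spring-specific adapter.
import org.springframework.stereotype.Component;
import org.springframework.transaction.support.TransactionTemplate;

@Component
class SpringTransactionalExecutor implements TransactionalExecutor {

    // Spring Boot auto-configures a TransactionTemplate when a
    // transaction manager is present.
    private final TransactionTemplate template;

    SpringTransactionalExecutor(TransactionTemplate template) {
        this.template = template;
    }

    @Override
    public <T> T execute(java.util.function.Supplier<T> work) {
        return template.execute(status -> work.get());
    }
}

This keeps @Transactional (or its equivalent) out of the application layer entirely; only the adapter knows about Spring.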

Related

Transaction management in microservices

We are rewriting a legacy app using microservices. Each microservice has its own DB. Certain API calls require calling another microservice and persisting data into both DBs. How can distributed transaction management be implemented effectively in this case?
Since we have not migrated completely to the new microservices environment, we still write data back to the old monolith. For this, when a microservice endpoint is called, we call a monolith service from the microservice API to write back the same data. How do we deal with the same problem in this case as well?
Thanks in advance.
There are various distributed transaction frameworks, usually included and maintained as part of heavyweight application servers like JBoss and WebLogic.
The standard usually implemented by such servers is Jakarta Transactions (JTA; formerly the Java Transaction API).
Tomcat and Spring don't support distributed transactions out of the box. You can add this functionality using a third-party framework like Atomikos (just googled it, I've never used it).
But remember, a microservice with JTA is not "micro" anymore :-)
Here is a small overview of the available technologies and possible workarounds:
https://www.baeldung.com/transactions-across-microservices
If you can afford to write to the legacy system later (i.e. allow some latency between updating the microservice and the legacy system) you can use the outbox pattern.
Essentially, that means you write transactionally to the microservice database, both to the tables you usually write to and to an additional "outbox" table of changes to apply, and then have a separate process that reads that table and updates the legacy system.
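A minimal sketch of the write side with Spring (repository and entity names are invented): the business write and the outbox insert share one local transaction, and a separate scheduled process relays the outbox rows to the legacy system later:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderRepository orders;  // the tables you usually write to
    private final OutboxRepository outbox; // the additional "outbox" table

    public OrderService(OrderRepository orders, OutboxRepository outbox) {
        this.orders = orders;
        this.outbox = outbox;
    }

    @Transactional // one local DB transaction covers both writes
    public void updateOrder(Order order) {
        orders.save(order);
        // Record the pending change; a separate poller reads this table
        // and applies the change to the legacy system.
        outbox.save(new OutboxEntry("ORDER_UPDATED", order.getId()));
    }
}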
You can also achieve something similar to the outbox pattern with a change data capture (CDC) mechanism on the DB used in the microservice(s).
Check out this answer on "Why is 2-phase commit not suitable for a microservices architecture?": https://stackoverflow.com/a/55258458/3794744

What is the use of service layer in Spring Boot applications?

I am new to Spring Boot and I am creating a RESTful API with no UI.
I am wondering whether I should use a business service and call the repository from there, or just call the repository directly from my REST controller?
Separation of concerns is the key:
The controller (presentation layer, or port) is a protocol interface which exposes application functionality as RESTful web services. It should do that and nothing more.
The repository (persistence layer, or adapter) abstracts persistence operations: find (by id or other criteria), save (create, update) and delete records. It should do that and nothing more.
The service layer (domain) contains your business logic. It defines which functionalities you provide, how they are accessed, and what to pass and get in return - independent of any port (of which there may be multiple: web services, message queues, scheduled events) and independent of its internal workings (it's nobody's business that the service uses the repository, or even how data is represented in a repository). The service layer may translate 1:1 from the repository data, or may apply filtering, transformation or aggregation of additional data.
The business logic may start simple in the beginning and offer no more than simple CRUD operations, but that doesn't mean it will stay that way forever. As soon as you need to deal with access rights, it's no longer a matter of routing requests from the controller directly to the repository; you need to check access and filter data as well. Requests may need validation and consistency checks before hitting the database, rules and additional operations may be applied, so your services gain value over time.
Even for simple CRUD cases, I'd introduce a service layer, which at least translates from DTOs to Entities and vice versa.
Keep your controllers/repositories (or ports and adapters) stupid, and your services smart, and you get a maintainable and well-testable solution.
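A bare-bones sketch of that layering (all type names here are invented; the repository is assumed to be a Spring Data interface returning Optional):

import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/customers")
class CustomerController { // "stupid" port: adapts HTTP, nothing more

    private final CustomerService service;

    CustomerController(CustomerService service) {
        this.service = service;
    }

    @GetMapping("/{id}")
    CustomerDto get(@PathVariable long id) {
        return service.findById(id);
    }
}

@Service
class CustomerService { // "smart" layer: logic, checks, DTO <-> entity translation

    private final CustomerRepository repository;

    CustomerService(CustomerRepository repository) {
        this.repository = repository;
    }

    CustomerDto findById(long id) {
        return repository.findById(id)
                .map(e -> new CustomerDto(e.getId(), e.getName())) // entity -> DTO
                .orElseThrow(() -> new CustomerNotFoundException(id));
    }
}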
The service layer is not a concept exclusive to Spring Boot. It's a software architecture term, frequently referred to as a pattern. Simple applications may skip the service layer. In practical terms, nothing stops you from invoking a repository method from the controller layer.
But I strongly advise using a service layer, as it is primarily meant to define the application boundaries. The service layer responsibilities include (but are not limited to):
Encapsulating the business logic implementation;
Centralizing data access;
Defining where the transactions begin/end.
Quoting the Service Layer pattern from Martin Fowler's Catalog of Patterns of Enterprise Application Architecture:
A Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
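To illustrate the transaction-boundary responsibility listed above, a sketch (service and repository names invented): the transaction begins and ends on the service method, regardless of which port invoked it:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class TransferService {

    private final AccountRepository accounts;

    TransferService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    @Transactional // the transaction begins and ends at the service boundary
    public void transfer(long fromId, long toId, long amount) {
        accounts.debit(fromId, amount);
        accounts.credit(toId, amount); // a failure here rolls back the debit too
    }
}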

Spring Boot distributed transaction

We need to find the best way to address distributed transaction management in our microservices architecture.
Here is the problem statement:
We have one composite microservice which interacts with 2 underlying atomic microservices (each meant for a specific purpose, obviously), each with its own separate database. E.g., we can consider these 2 microservices as
STUDENT_SERVICE (STU_DB)
TEACHER_SERVICE (TEACHR_DB)
The use case in the composite service is that a user (administrator) can assign a teacher to a student for a specific course, etc.
I wonder how we can address this problem in one transaction, as each service (STUDENT_SERVICE and TEACHER_SERVICE) has a separate DB, and everything should happen in one transaction, either commit or rollback.
Since those 2 services are separate, I see that JTA would not be of help, as it is meant for these 2 applications (services) being deployed on the same application server!
I have ruled out JTA, as mentioned above.
// Pseudo code
class CompositeService {

    void assignStaff(Request request) {
        // txn start
        updateStudentServiceAPI(request);
        updateTeacherServiceAPI(request);
        // txn end
    }
}
The system should be in a consistent state after API execution.
This is a tricky question, even if it's not obvious at first sight.
The functionality you are asking for is generally understood to be an anti-pattern in a microservice architecture.
A microservice architecture is in general a distributed system. Transactions in distributed systems are hard (see https://martin.kleppmann.com/2015/09/26/transactions-at-strange-loop.html). Your application consists of two services.
JTA is a Java API for ACID-style transactions. ACID transactions usually require locks to be established in databases. As the transaction spans multiple services (in your case two), a failure of one service can block processing in the other service. In that case you lose the advantages of the microservice architecture - loose coupling and independence of the services. You can end up building a distributed monolith (see this nice article: https://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/).
Btw., there are several discussions on the topic of transactions in microservices here on Stack Overflow. Just search, or check e.g.
Distributed transactions in microservices
Transactions in microservices
Transactions across REST microservices?
What are your options
(Disclaimer: I'm a developer for http://narayana.io and the presented options are from the perspective of Java EE and Narayana. There could be other projects providing similar functionality. Plus, even though Narayana integrates nicely with Spring, you will possibly need to handle some integration issues.)
You really need to run an ACID-style transaction in your project - i.e. you insist you need the transactional behaviour the way you describe. Then you need to span the transaction over the services. If the services communicate over REST, you can consider for example Narayana REST-AT (http://jbossts.blogspot.com/2011/03/rest-cloud-and-transactions.html; start by looking into the quickstart here: https://github.com/jbosstm/quickstart/tree/master/rts).
You relax your requirements for atomicity, and then you can consider a transaction model that relaxes consistency (you are fine with being eventually consistent). You can consider for example LRA (https://github.com/eclipse/microprofile-lra/blob/master/spec/src/main/asciidoc/microprofile-lra-spec.adoc). (Unfortunately the spec and implementation are still not ready, but a PoC could be run on the current state.)
You want to use a completely different approach to transaction processing. Then you can investigate event sourcing. You would deploy e.g. Apache Kafka and send events for updates to the event store. Each service then reads those events and updates its own DB independently; see the sketch below.
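A minimal sketch of that last, event-driven option using Spring for Apache Kafka (topic, group and payload names are invented; this gives eventual consistency, not an ACID transaction):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
class CompositeAssignmentService { // publishes the fact instead of coordinating a 2PC

    private final KafkaTemplate<String, String> kafka;

    CompositeAssignmentService(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    public void assignStaff(String assignmentJson) {
        kafka.send("staff-assignments", assignmentJson);
    }
}

@Service
class StudentServiceListener { // each atomic service updates its own DB

    @KafkaListener(topics = "staff-assignments", groupId = "student-service")
    public void onAssignment(String assignmentJson) {
        // update STU_DB locally; idempotency + retries give eventual consistency
    }
}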

Development compromises in using Spring Cloud Stream

The case for event-driven microservices such as Spring Cloud Stream is their asynchronous nature, which I do agree makes them more scalable.
But I have an issue regarding how to code it in a way where I don't lose certain key features that I have access to in synchronous services.
In a servlet-based MS, I make full use of servlet context variables and servlet-based Spring autowiring functions.
For example, I lean heavily on HTTP headers to carry metadata between microservices without having to touch the payload. But with Spring Cloud Stream on Kafka, Kafka doesn't support message headers of any kind! I lose that immediately if I use SCS. Putting the metadata into the payload causes all sorts of changes in my model classes if I define the attributes explicitly. Yes, I can use a simple HashMap to simulate the HTTP header object, but it really seems like reinventing the wheel to me.
On the autowiring side: I maintain an audit log record per request, which I implement by declaring a request-scoped HashMap bean and autowiring it into any method in the servlet's call stack that needs to append data to the audit log. Basically it's just a global variable holding some data within a single request. But in SCS, again, I lose that, because bean scopes that rely on servlets are not available.
So far, there seem to be a lot of trade-offs that I have to make just to get Spring Cloud Stream to work for me.
I thought about an alternative approach where I use SCS just to create an entry point, but the Source method would just take the event, use a Processor to construct an HTTP request and send the request along to an HTTP endpoint. But why go through all that trouble then?
Hoping that some more experienced devs can shed some light on how they leverage SCS.
@feicipet Thanks for the detailed question. Let me try to address some of your concerns in the order you have listed them:
+1
+1
I am not sure why you are referring to it as servlet-based instead of Spring-based? Those are features provided by Spring, but read on...
Spring Cloud Stream doesn't use Kafka; the end user does, while Spring Cloud Stream provides a Kafka binder that lets Spring Cloud Stream integrate with Kafka. Furthermore, while Kafka indeed did not support headers prior to version 0.11, Spring Cloud Stream has always supported headers and will continue to do so, even with pre-0.11 Kafka, by embedding them in the Message and then extracting them on the consumer side into proper Message headers, completely transparently to the end user. In other words, one could assume Kafka supported headers simply by using Spring Cloud Stream. With Kafka 0.11+ headers are supported natively, and we have adjusted to that with the same level of transparency.
So you don't need to put anything in the payload. Just create an appropriate Message (payload + headers) and SCSt will take care of the rest regardless of the broker (Kafka, Rabbit, Foo etc.).
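For example, with Spring's MessageBuilder (the header names and the Order payload type are invented placeholders):

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

class OrderMessages {

    static Message<Order> withMetadata(Order order, String correlationId) {
        return MessageBuilder
                .withPayload(order)                           // payload untouched
                .setHeader("x-correlation-id", correlationId) // metadata as headers
                .setHeader("x-origin-service", "order-service")
                .build();
    }
}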
Yes, you do, simply because, as you alluded to earlier, SCSt promotes an asynchronous and stateless architecture. However, I do not agree that what you are trying to accomplish is impossible; rather, it is just not accomplishable the way you are describing. There are other ways to maintain context, and I would be more than glad to discuss that as a separate topic.
I would not call them trade-offs, rather differences in architecture that have their benefits; but it is not a one-size-fits-all architecture, and therefore its viability should be discussed within the context of a concrete use case.
+1. You don't have to separate it into Source and Processor. You can simply create a custom Source app with an exposed REST endpoint and custom processing logic. However, we are currently working on enhancements in the framework to ensure that you can do the same with the existing starter apps.
Obviously we have touched on many points here, and some of them would probably need to be debated further, but I hope this clears up some of your concerns.
Cheers

Distributing business logic across different servers (like JBoss/Glassfish) using Spring and still under one transaction

I would like to create an example (code) using Spring in which business logic is distributed across different servers like JBoss or Glassfish, yet still runs under one transaction. First of all, is this even possible in Spring? I know EJB has this option. Is there a similar technique in Spring as well? I am looking for a synchronous communication approach, not asynchronous message-oriented middleware. Any help/pointers appreciated.
Thanks
Prakash
Spring has support for RMI and provides its own remoting mechanism, HttpInvoker, but according to the docs neither provides any remote transaction propagation.
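For reference, exposing a service over HttpInvoker looks roughly like this (AccountService is a placeholder interface; note that HttpInvoker has been deprecated in recent Spring versions, and, as said above, each remote call still runs in its own transaction on the server):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;

@Configuration
class RemotingConfig {

    @Bean("/remoting/accountService") // the bean name doubles as the URL mapping
    HttpInvokerServiceExporter accountServiceExporter(AccountService service) {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setService(service);
        exporter.setServiceInterface(AccountService.class);
        return exporter; // no transaction context crosses this boundary
    }
}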
Similar questions:
Spring Distributed Transaction Involving RMI calls possible?
Transaction propagation in multiple servlet context with multiple data source
