Transaction management of JPA and external API calls - spring-boot

I'm new to Spring and started using Spring Boot for the project. We have a use case of implementing database changes and a few external API calls as one transaction. Please suggest: is this possible with Spring's @Transactional?

Do the API calls need to be part of the transaction?
If the answer is no, I would advise using TransactionTemplate.execute(), leaving the API requests outside of the transaction.
If you need to make the API requests inside a transaction, I would advise against it: you would be holding database connections and locks for the duration of those requests.
You may also want to read up on the eventual consistency model.
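A minimal plain-Java stand-in for that ordering (the inTransaction helper and notifyExternalSystem are hypothetical; in a real Spring application you would use an injected TransactionTemplate and your actual HTTP client):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class OrderService {
    // Records the order of operations so the pattern is visible.
    static final List<String> log = new ArrayList<>();

    // Stand-in for TransactionTemplate.execute(): runs the DB work and
    // "commits" before returning. Hypothetical helper, not a Spring API.
    static <T> T inTransaction(Supplier<T> dbWork) {
        T result = dbWork.get();
        log.add("commit");
        return result;
    }

    // Hypothetical external API client call.
    static void notifyExternalSystem(long orderId) {
        log.add("api-call:" + orderId);
    }

    public static void main(String[] args) {
        // 1. Do all database changes inside the transaction.
        long orderId = inTransaction(() -> {
            log.add("insert-order");
            return 42L;
        });
        // 2. Only after the commit, call the external API. If this fails,
        //    the DB state is already consistent and can be reconciled later
        //    (eventual consistency) instead of holding locks during the call.
        notifyExternalSystem(orderId);
        System.out.println(String.join(",", log));
    }
}
```

The point is purely the ordering: the transaction commits first, and the slow, failure-prone network call happens with no database resources held.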

Using the @Transactional annotation for multiple database changes as one transaction is of course doable, but not so much for the external API calls. You would have to implement some custom compensation logic for those: the external service would need endpoints to undo your last actions, and you would have to call them manually, for example in a try-catch block. If the external API call creates an item, there would also have to be an endpoint to delete that item, and so on.
So to summarise: using the @Transactional annotation for implementing database changes as one transaction is fine, but it is not enough for external API calls.
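A sketch of that compensation pattern, assuming the remote service exposes a matching delete endpoint (all names here are hypothetical, and the remote service is faked with an in-memory set):

```java
import java.util.HashSet;
import java.util.Set;

public class CompensationExample {
    // Hypothetical remote service: every "create" endpoint has a matching
    // "delete" endpoint so a failed sequence can be undone.
    static final Set<String> remoteItems = new HashSet<>();

    static void createItem(String id) { remoteItems.add(id); }
    static void deleteItem(String id) { remoteItems.remove(id); }

    public static void placeOrder(boolean failLater) {
        createItem("item-1");
        try {
            // Further work that may fail (more API calls, DB writes, ...).
            if (failLater) {
                throw new IllegalStateException("downstream failure");
            }
        } catch (RuntimeException e) {
            // Compensate: manually undo the earlier external call, since
            // @Transactional cannot roll it back for us.
            deleteItem("item-1");
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            placeOrder(true);
        } catch (IllegalStateException expected) {
            // The compensating call removed the remotely created item.
        }
        System.out.println("remote items after failure: " + remoteItems.size());
    }
}
```

The database part of the same method can still be wrapped in @Transactional; only the external calls need this manual undo.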

Related

Is there any way to use redis transaction with Spring data redis reactive?

The usual org.springframework.data.redis.core.RedisTemplate has a multi() method, which allows you to start a transaction, and exec() to commit it.
But org.springframework.data.redis.core.ReactiveRedisTemplate does not have those methods.
I have searched a lot for a way to use transactions with spring-boot-starter-data-redis-reactive and found no solutions.
The only way I see now is to manually create a Lettuce client bean and use it alongside the Spring implementation, but it is not convenient to have two separate Redis clients.
Does anyone know how to use Redis transactions with spring-boot-starter-data-redis-reactive? Could you please write a simple example?

Thread model for Async API implementation using Spring

I am working on a micro-service developed using Spring Boot. I have implemented the following layers:
Controller layer: Invoked when a user sends an API request
Service layer: Processes the request; either sends a request to a third-party service or to the database
Repository layer: Used to interact with the database
Methods in all of the above layers return a CompletableFuture. I have the following questions related to this setup:
Is it good practice to return CompletableFuture from all methods across all layers?
Is it always recommended to use the @Async annotation when using CompletableFuture? What happens when I use the default fork-join pool to process the requests?
How can I configure the threads for the above methods? Would it be a good idea to configure a thread pool per layer? What other configurations can I consider here?
Which metrics should I focus on while optimizing performance for this micro-service?
If the work your application is doing can be done on the request thread without too much latency, I would recommend it. You can always move to an async model if you find that your web server is running out of worker threads.
The @Async annotation is basically helping with scheduling. If you can, use it - it can keep the code free of references to the thread pool on which the work will be scheduled. As for which thread actually does your async work, that's really up to you. If you can, use your own pool. That will make sure you can add instrumentation and expose configuration options that you may need once your service is running.
Technically you will have two pools in play. One that Spring will use to consume the result of your future, and another that you will use to do the async work. If I recall correctly, Spring Boot will configure its pool if you don't already have one, and will log a warning if you didn't explicitly configure one. As for your worker threads, start simple. Consider using Spring's ThreadPoolTaskExecutor.
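As a plain-Java sketch of what supplying your own pool looks like (in Spring you would declare a ThreadPoolTaskExecutor bean and reference it by name from @Async; the pool name and size below are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncPoolExample {
    // Runs a task on a dedicated, named worker pool instead of the common
    // fork-join pool, so the pool can be sized and instrumented on its own.
    static String runOnOwnPool() throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4,
                r -> new Thread(r, "svc-worker"));
        try {
            // Passing the executor explicitly is the plain-Java equivalent
            // of @Async("myExecutor") resolving a named bean in Spring.
            return CompletableFuture
                    .supplyAsync(() -> Thread.currentThread().getName(), workers)
                    .get();
        } finally {
            workers.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnOwnPool()); // prints "svc-worker"
    }
}
```

Without the second argument, supplyAsync silently falls back to ForkJoinPool.commonPool(), which is shared process-wide and hard to monitor in isolation.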
Regarding which metrics to monitor, start first by choosing how you will monitor. Using something like Spring Sleuth coupled with Spring Actuator will give you a lot of information out of the box. There are a lot of services that can collect all the metrics actuator generates into time-based databases that you can then use to analyze performance and get some ideas on what to tweak.
One final recommendation: Spring WebFlux is designed from the start to be async. It has a learning curve for sure, since reactive code is very different from the usual MVC stuff. However, that framework has also thought about all the questions you are asking, so it might be better suited for your application, especially if you want to make everything async by default.

Connectivity issues when calling a controller endpoint, how can we deal with it? [Spring and Kotlin]

I have a service method used in my controller endpoint whose logic is not fully executed when connectivity issues arise (as you would expect). I am looking for a potential approach other than using a try-catch block within the service method. What is the recommended way to go? Is there any "rollback" functionality that can be injected in Spring whereby, if something happens during the execution of the service logic, it rolls back?
Thank you

Transactions in REST [duplicate]

This question already has answers here:
Transactions in REST?
(13 answers)
Closed 7 years ago.
How can I simulate transactions in REST?
I have just developed the back-end with Jersey, Spring, and Spring Data for data access, which connects to a MySQL DB. My goal is to show (as a test) that it can handle database transactions, but I can't imagine how to do it.
You would test a RESTful endpoint that does a transaction the same way you would test an endpoint that does not do a transaction - by calling it.
Write an integration test (probably JUnit; make sure it uses the @WebAppConfiguration annotation) that calls your RESTful endpoint(s). In your test, inject (@Autowired or @Resource) the service that contains your endpoint. Call the methods on that service (i.e. the endpoint), passing in fake or generated parameters. In your test, look for the expected behavior. For example, if you accessed a PUT endpoint that was supposed to create a book, try to retrieve it from the database using the DAO or access object. If the DAO can retrieve it, then the endpoint successfully negotiated the transaction. Similarly, set up a test where the transaction should not go through, and therefore be rolled back, and use the DAO to make sure it did not get into the database.
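The shape of such a test, reduced to plain Java with an in-memory stand-in for the database (putBook is a hypothetical endpoint; in a real project this would be a Spring-managed JUnit test with the real DAO injected):

```java
import java.util.HashMap;
import java.util.Map;

public class TransactionEndpointTest {
    // In-memory stand-in for the database table the DAO would query.
    static final Map<String, String> bookTable = new HashMap<>();

    // Hypothetical endpoint: creates a book inside a transaction and
    // rolls back (persists nothing) when validation fails.
    static boolean putBook(String id, String title) {
        if (title == null || title.isEmpty()) {
            return false; // transaction rolled back, nothing persisted
        }
        bookTable.put(id, title);
        return true;
    }

    public static void main(String[] args) {
        // Commit case: after the call succeeds, the "DAO" can read the row.
        putBook("1", "Dune");
        if (!bookTable.containsKey("1")) {
            throw new AssertionError("commit case: book should be persisted");
        }
        // Rollback case: after the call fails, nothing is visible.
        putBook("2", "");
        if (bookTable.containsKey("2")) {
            throw new AssertionError("rollback case: book must not be persisted");
        }
        System.out.println("both checks passed");
    }
}
```

The two branches mirror the two integration tests described above: one asserting the committed row is readable, one asserting the rolled-back row never appears.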

Replacing Tuxedo calls with JDBC

I have been tasked with replacing some Tuxedo services with the equivalent JDBC calls.
Considering a single Tuxedo service, I have started by creating a JDBC DAO, which implements the same interface as the existing Tuxedo DAO. I am calling methods on this from a new Service layer. I am planning to use the Spring #Transactional annotation on my Service layer to handle JDBC transactions.
Tuxedo handles transactions internally, hence a single Tuxedo DAO method call is comparable to multiple method calls on a JDBC DAO, which would be called from the new Service layer.
Given the above it makes sense to me that the Tuxedo DAO should really be a service level entity. Does that make sense?
Any thoughts on the best way to lay this out from a Service/DAO layer perspective would be appreciated. I need to keep the Tuxedo DAO for legacy purposes, but refactoring this into the Service layer should not be an issue if required.
Thanks
Jay
Well,
It makes a lot of sense. In fact, a Tuxedo service (depending on whether it's only DB access or has some more business logic) could be replaced by a simple DB DAO or by some sort of service (EJB, web service, etc., depending on the standard technologies used in the enterprise).
I would start by classifying the services so you can decide what to do with each one of them and perhaps settle on some strategies, something like "DB-DAO", "OTHER-DATASTORE-DAO", "MORE COMPLEX SERVICE".
After you have done this work, you can build your direct DAOs and services. If you decide to deploy the services on a different infrastructure (for scaling reasons, or just because many applications will use them and you want to keep clean visibility), you can still write DAOs that consume them and respect the original calling interface, but with a new implementation behind.
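As a rough sketch of the layering discussed above, keeping the coarse-grained Tuxedo-style contract as the service-level interface while the new service composes fine-grained JDBC DAO calls underneath it (all names are hypothetical; in Spring the service method would carry @Transactional):

```java
import java.util.ArrayList;
import java.util.List;

public class LayeringSketch {
    // The coarse-grained contract the legacy Tuxedo DAO already fulfils.
    interface AccountOperations {
        void transfer(String from, String to, long amount);
    }

    // Fine-grained JDBC-style DAO: each method maps to one statement.
    static class AccountJdbcDao {
        final List<String> executed = new ArrayList<>();
        void debit(String account, long amount)  { executed.add("debit:" + account); }
        void credit(String account, long amount) { executed.add("credit:" + account); }
    }

    // New service: implements the same contract by composing several DAO
    // calls. In Spring this method would be @Transactional so the debit
    // and credit commit or roll back together, mirroring what Tuxedo
    // handled internally.
    static class AccountService implements AccountOperations {
        final AccountJdbcDao dao = new AccountJdbcDao();
        @Override
        public void transfer(String from, String to, long amount) {
            dao.debit(from, amount);
            dao.credit(to, amount);
        }
    }

    public static void main(String[] args) {
        AccountService service = new AccountService();
        service.transfer("A", "B", 100);
        System.out.println(service.dao.executed);
    }
}
```

Callers keep depending on AccountOperations, so the Tuxedo-backed implementation and the new JDBC-backed one stay interchangeable during the migration.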
Regards
