I am using Spring Boot and my application is a monolith for now; I may switch to microservices later.
SCENARIO 1: my DB call does NOT depend on the REST response
@Transactional
class MyService {
    public void DBCallNotDependsOnRESTResponse() {
        // DB call
        // REST call; this REST call returns a response like 200 "successful"
    }
}
SCENARIO 2: my DB call depends on the REST response
@Transactional
class MyService {
    public void DBCallDependsOnRESTResponse() {
        // REST call, making a real payment transaction using BrainTree
        // DB call, HERE THE DB CALL DEPENDS ON THE REST RESPONSE
    }
}
In Scenario 1 I have no issues, as the DB change gets rolled back if the REST call fails.
BUT in Scenario 2 the REST call cannot be rolled back if an exception occurs during the DB call.
I already searched Google for this and found suggestions to use something like a pub-sub model, but I could not get that concept clear in my head.
I would be glad if someone could provide a solution for SCENARIO 2. How do other e-commerce businesses handle their transactions effectively? I guess my question relates to architecture design, so please advise a good architectural approach to solve the above transaction issue. Do you think a messaging system like Kafka would solve it? FYI, my application is currently monolithic; should I switch to microservices? Do I need two-phase commit, or would Sagas solve my problem? Can Sagas be used in a monolithic application?
EDIT:
Regarding the REST call: I am actually making a real payment transaction using BrainTree, which is a REST call.
Can you elaborate on what you are achieving with the REST call? Are you updating any data that will be used by the DB call?
If the two calls are independent, does the order matter? The DB call will only be committed at the end of the method anyway.
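For illustration only, a minimal sketch of the compensation-style approach (the Saga idea the question mentions) for Scenario 2; braintreeGateway, orderRepository and their methods are hypothetical placeholders here, not the real Braintree SDK API:

public void DBCallDependsOnRESTResponse(Order order) {
    // 1. REST call: charge the customer first (this cannot be rolled back by the DB transaction)
    String transactionId = braintreeGateway.sale(order.getAmount()); // hypothetical wrapper
    try {
        // 2. DB call: record the order, using the REST response
        orderRepository.save(order.withTransactionId(transactionId));
    } catch (RuntimeException e) {
        // 3. Compensating action: explicitly undo the charge, since it cannot be rolled back
        braintreeGateway.refund(transactionId); // hypothetical wrapper
        throw e;
    }
}

The same compensation idea scales up to a full Saga when several remote calls are involved, and it does not in itself require microservices; it can be used inside a monolith as well.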
At what point does Spring WebFlux perform the subscription? Everywhere I have read that there must be a subscription, otherwise nothing happens. In my short time with Spring WebFlux, I have never seen a subscribe() in either the controllers or the services.
My doubt also applies when using flatMap(), map(), etc.: at what point does the subscription take place?
What I have read does not really resolve my doubts.
public Flux method() {
    ....
    myFlux.flatMap(data -> {
        ....
    }).flatMap(e -> { .... });
}
I know this is an asynchronous issue, but do the flatMap calls run at the same time? I have noticed that some of the data is sometimes null.
It's the framework (spring-webflux) that subscribes to the returned Mono or Flux. For example if you use Netty (that's the default), then subscription happens here based on my debugging:
https://github.com/reactor/reactor-netty/blob/db27625064fc78f8374c1ef0af3160ec3ae979f4/reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java#L962
Also, this article might be of help to understand what happens when:
https://spring.io/blog/2019/03/06/flight-of-the-flux-1-assembly-vs-subscription
You need to call .subscribe() or block() after your flatMap. Here's an example.
Assuming that myFlux is of type Flux, the following will trigger the subscription based on the example above:
myFlux.subscribe(System.out::println);
Here's an explanation on a separate StackOverflow thread.
But in your method function, you are returning a Flux object - so it's up to the consumer of the method() function how it wants to subscribe to the Flux. You shouldn't be trying to subscribe to the Flux from within the method itself.
The answer is: it depends.
For example, if this is a Spring Controller method, then it is the framework itself that subscribes to the Mono or Flux.
If it is a method that is triggered from time to time by a Scheduler, then you must explicitly subscribe to the Mono or Flux, otherwise, no processing will take place.
This means that if your application only exposes a REST API and no processing needs to be triggered in any other way, then it is very likely that you will never need to explicitly subscribe to a Mono or Flux, because Spring will take care of that for you.
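To illustrate the two cases, here is a minimal sketch (the endpoint, the scheduled method and loadFromRemote are made-up names for illustration):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
class GreetingController {

    // Case 1: a controller method. Returning the Flux is enough; WebFlux
    // subscribes to it when it writes the HTTP response.
    @GetMapping("/greetings")
    public Flux<String> greetings() {
        return Flux.just("hello", "world")
                   .map(String::toUpperCase); // only assembly, nothing runs yet
    }

    // Case 2: a scheduled method (requires @EnableScheduling elsewhere).
    // Nobody else subscribes here, so without the explicit subscribe()
    // the pipeline is assembled but never executed.
    @Scheduled(fixedRate = 60_000)
    public void refreshCache() {
        Flux.just("a", "b", "c")
            .flatMap(this::loadFromRemote)
            .subscribe(value -> System.out.println("cached " + value));
    }

    private Flux<String> loadFromRemote(String key) {
        return Flux.just(key + "-value"); // placeholder for a real remote call
    }
}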
I have a controller which calls three services A, B and C, and these services call their own DAOs to perform insertions into the database.
The problem is that if, for example, something goes wrong in service C, the changes from A and B still persist in the database. I want that if anything goes wrong in any of the services, the database operations already performed by the other services are rolled back. How do I achieve this?
@PostMapping("/data")
public String insertData(@RequestBody String data) {
    A.insert(data);
    B.insert(data);
    C.insert(data);
    return data;
}
You have two options:
A transaction at controller level - simple and easy to introduce; on the other hand, it means long-lasting transactions.
The Saga design pattern - complicated, but it deals with all the cases.
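A minimal sketch of the first option, assuming A, B and C are injected Spring beans as in the question; with one @Transactional boundary around all three inserts, an exception from any of them rolls back the others:

import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DataController {

    private final A a;
    private final B b;
    private final C c;

    public DataController(A a, B b, C c) {
        this.a = a;
        this.b = b;
        this.c = c;
    }

    // One transaction spans all three inserts: if c.insert() throws,
    // the rows inserted by a and b are rolled back as well.
    @Transactional
    @PostMapping("/data")
    public String insertData(@RequestBody String data) {
        a.insert(data);
        b.insert(data);
        c.insert(data);
        return data;
    }
}

Many teams put the @Transactional method on a service-layer facade instead of the controller, but the idea of a single transactional boundary around the three calls is the same.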
I am using Hibernate 4 but not Spring. In the application I am developing I want to log a record of every add, update and delete to a separate log table. At the moment my code runs two transactions in sequence, and it works, but I really want to wrap them up into one transaction.
I know Hibernate does not support nested transactions, only in conjunction with the Spring framework. I've read about savepoints, but they're not quite the same thing.
Nothing in the JPA and JTA specifications supports nested transactions.
What you most likely mean by "support by Spring" is @Transactional annotations on multiple methods in a call hierarchy. What Spring does in that situation is check whether there is an ongoing transaction and, if not, start a new one.
You might think that the following situation is a nested transaction.
@Transactional
public void method1() {
    method2(); // method in another class
}

@Transactional(propagation = REQUIRES_NEW)
public void method2() {
    // do something
}
What happens in reality is, simplified, the following. The type of transactionManager1 and transactionManager2 is javax.transaction.TransactionManager.
// call of method1 intercepted by spring
transactionManager1.begin();
// invocation of method1
// call of method 2 intercepted by spring (requires new detected)
transactionManager1.suspend();
transactionManager2.begin();
// invocation of method2
// method2 finished
transactionManager2.commit();
transactionManager1.resume();
// method1 finished
transactionManager1.commit();
In other words, the first transaction is basically put on pause. It is important to understand this, since the transaction of transactionManager2 might not see changes made by transactionManager1, depending on the transaction isolation level.
Maybe a little background on why I know this: I've written a prototype of a distributed transaction management system that allows methods to be executed transparently in a cloud environment (one method gets executed on one instance, the next method might be executed somewhere else).
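For the original use case (an entity change plus a log record in a separate table) no nesting is needed at all: plain Hibernate lets you put both inserts into one transaction so they commit or roll back together. A minimal sketch, with hypothetical Customer and AuditLog entities:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class CustomerDao {

    private final SessionFactory sessionFactory;

    public CustomerDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void saveWithAudit(Customer customer) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.save(customer);                               // the actual change
            session.save(new AuditLog("ADD", customer.getId()));  // the log record
            tx.commit();                                          // both rows or neither
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}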
I am working on a new Spring MVC based application.
I have multiple flows where the controller makes a request to a business manager, and the business manager in turn talks to the DAO layer to retrieve data.
There can be cases where I don't get any data back from the DAO.
I want to understand the best way to deal with this situation.
1) Whenever no data is retrieved for a query, throw a custom exception like 'Content Not Found' from the DAO layer to the business layer and then to the controller, and let the controller decide what to do.
2) Return a blank/null POJO to the business manager and let the manager throw the exception to the controller.
3) The controller receives null/blank from the manager and decides what to do with it.
I find the 1st approach better, as when the exception is thrown I have the complete stack trace to understand where exactly the problem occurred; on the downside I will end up cluttering my code with exceptions in the signatures.
Number 3 will leave the code clean, but I won't be able to pinpoint where exactly the data retrieval failed, as there can be multiple calls to the DAO from the business layer.
Throw an exception on the level where the situation of not having matching records (in other words, no data to be processed) actually is exceptional.
This largely depends on the specifics of your domain, but it's often the best idea to simply return an empty container object from the DAO if there was no matching object in the database. That is: a Collections.emptyList(), Optional.empty() or something with similar semantics. Under no circumstances return null, it's 2015 after all.
If having no matching data is an exceptional situation in your business domain, translate that to a specific exception in the service layer and let the controller handle it by translating again: into an error HTML page, some specific XML or JSON response, or whatever interface your users use to interact with your system.
The DAO layer executes queries and returns the results. It doesn't care about the results, so "nothing found" cannot be an exceptional situation in the DAO layer. It can be in the business layer, but it doesn't have to.
I wont be able to pin point where exactly the data retrieval failed
If your use case is http://server/something/2 and something 2 doesn't exist in the database, then there simply is no failure on the server side. So if there is no exception, or only one in the controller, then you can be pretty confident that no data is returned to the client because no data exists.
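A minimal sketch of that layering, with made-up Something/SomethingDao/SomethingService names for illustration:

import java.util.Optional;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

class Something { }

// DAO: "nothing found" is a normal query result, so return an empty Optional.
class SomethingDao {
    Optional<Something> findById(long id) {
        return Optional.empty(); // placeholder for the real database query
    }
}

// Service: decide whether a missing record is exceptional for the business.
class SomethingService {
    private final SomethingDao dao;

    SomethingService(SomethingDao dao) {
        this.dao = dao;
    }

    Something getById(long id) {
        return dao.findById(id)
                  .orElseThrow(() -> new SomethingNotFoundException(id));
    }
}

// Thrown from the service and translated by Spring MVC into an HTTP 404
// when it escapes a controller method.
@ResponseStatus(HttpStatus.NOT_FOUND)
class SomethingNotFoundException extends RuntimeException {
    SomethingNotFoundException(long id) {
        super("No something with id " + id);
    }
}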
I would suggest throwing custom exceptions at each layer. Each layer should be aware of exception handling.
It's beautifully explained in the link below:
Handling Dao exceptions in service layer
I am having a little trouble with integration tests and transactions.
I have a REST service system. Behind it all I have a JPA repository with a Postgres database. To test it I build JUnit tests that make calls against the system. The test loads the web context and another xy-context where I have the configuration of security and database connections. On the test method I have the @Transactional annotation.
The test makes 2 requests (this is only one example; I have more similar scenarios with other objects):
insert a new user
for this user, create a group and then bind it to the user
The test makes the first call, which returns an id that I use to perform the second call.
The second call takes the id and makes the POST, and there I have several problems.
Details of the second call:
The test makes a POST to a controller
The controller takes the request and forwards it to the service
The service method (with @Transactional) takes the request and does:
a lookup to find the inserted user
an insert of a group object
an update of the user with the groupId (generated in point 2)
Now one of the problems I had was an AccessDeniedException at point 3.1, because I also have ACLs and have to check whether there are enough permissions.
One of the things I tried was to set:
@Transactional(propagation = Propagation.REQUIRES_NEW)
on the Service Method.
After this, the AccessDeniedException disappeared, but the lookup at point 3.1 gives me an empty result (the lookup itself is correct, because in other scenarios I get correct results). This is strange, because the first POST was OK, and as I understand it Spring handles @Transactional so that a commit is performed when a transaction is closed. This brought me to another idea: remove the @Transactional annotation from the test. But when I did that, the database kept all the data of this scenario until the end of the test session (if you have a lot of tests this is not desirable), and this is not a very good thing.
I have written a little about where my doubts and problems are without posting a lot of code, for privacy reasons, but on request I can post small pieces of code.
It's also probable that the approach is incorrect.
The questions are:
- How can I make this service work?
- Is it correct to set (propagation = Propagation.REQUIRES_NEW)?
- Is it correct to set @Transactional on the test (possibly with a rollback)?
Thanks a lot.
To run the tests I use MockMvc to make the requests, plus some annotations on the class:
@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(locations = { ..... })
@Transactional
public class tests {

    @Test
    public void aTest() {
        mockMvc = MockMvcBuilders
                .webAppContextSetup(webApplicationContext)
                .addFilter(new DelegatingFilterProxy("springSecurityFilterChain", webApplicationContext), "/*")
                .build();
        mockMvc.perform(post(.....));
    }
}
To answer your questions:
Is it correct to set @Transactional on the test (possibly with a rollback)?
Not really, but you can. You are making two requests, the second depends on the first, and an HTTP request will not remember your transaction; if you insist on doing it, you need to flush your session between the requests.
Is it correct to set (propagation = Propagation.REQUIRES_NEW)?
It depends. REQUIRES_NEW means it will start a new transaction; the consequence is that everything in the existing transaction will be invisible in the new transaction, because the old one is not committed yet! If this server is the entry point of the transaction, it makes no difference, but be aware of the visibility problem.
How can I make this service work?
OK, forget my answers to the previous questions. If I had to write the test, I would do it this way:
The test is not transactional. If you are doing an integration test, you don't need to roll back individual tests. If you want to roll back the commit, then you have the wrong test case; you should have two test cases, insert user and update group.
The test has 3 parts (sketched below):
Send a request to insert the user and get the ID (single transaction)
Send a request to update the group (another transaction)
Send a request to fetch the user and do the checks.
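A rough sketch of such a non-transactional test, reusing the mockMvc setup from the question; the URLs, JSON payloads and the extractId helper are made up for illustration (post, get, status and jsonPath are the usual MockMvcRequestBuilders/MockMvcResultMatchers static imports):

@Test
public void insertUserThenUpdateGroup() throws Exception {
    // 1. insert the user; its transaction is committed on the server side
    String response = mockMvc.perform(post("/users").content("{\"name\":\"bob\"}"))
            .andExpect(status().isOk())
            .andReturn().getResponse().getContentAsString();
    String userId = extractId(response); // hypothetical helper that parses the returned id

    // 2. create the group and bind it to the user; a second, independent transaction
    mockMvc.perform(post("/users/" + userId + "/groups").content("{\"name\":\"admins\"}"))
            .andExpect(status().isOk());

    // 3. fetch the user again and check that the group is attached
    mockMvc.perform(get("/users/" + userId))
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.groups[0].name").value("admins"));
}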
Hope this can help you.