I have a controller which calls three services, A, B and C, and each service calls its own DAO to perform an insert into the database.
The problem is that if something goes wrong in service C, for example, the rows inserted by A and B still persist in the database. I want all the database operations performed by the other services to roll back if any one of the services fails. How do I achieve this?
@PostMapping("/data")
public String insertData(@RequestBody String data) {
    A.insert(data);
    B.insert(data);
    C.insert(data);
    return data;
}
You have two options:
Transaction at the controller level - simple and easy to introduce; on the other hand, it produces long-lived transactions.
Saga design pattern - complicated, but it deals with all the cases.
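A minimal plain-Java sketch of the first option's all-or-nothing behaviour (all names are hypothetical; in a real Spring application you would instead move the three calls into a single @Transactional facade method and let Spring perform the rollback rather than managing an undo list by hand):

```java
import java.util.ArrayList;
import java.util.List;

public class AllOrNothingSketch {
    static List<String> table = new ArrayList<>();   // stands in for the database table

    // Stands in for one service's insert; "fail" simulates service C blowing up.
    static void insert(String row, boolean fail) {
        if (fail) throw new RuntimeException("service failed: " + row);
        table.add(row);
    }

    // Runs all three inserts; undoes every applied insert if any of them throws.
    static boolean insertAll(String data) {
        List<String> undo = new ArrayList<>();
        try {
            insert("A:" + data, false); undo.add("A:" + data);
            insert("B:" + data, false); undo.add("B:" + data);
            insert("C:" + data, true);  undo.add("C:" + data); // C fails here
            return true;
        } catch (RuntimeException e) {
            table.removeAll(undo);                   // the "rollback"
            return false;
        }
    }

    public static void main(String[] args) {
        boolean committed = insertAll("data");
        System.out.println("committed=" + committed + " rows=" + table.size());
    }
}
```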
I have a table which holds a list of lookup values - at most 50 rows.
Currently, I query this table every time I need to look up a particular value, which is not efficient.
So I am planning to optimize this by loading all the values at once as a List from the repository using findAll.
List<CardedList> findAll();
My question here is: Class A calls Class B, and Class B holds this repository. Will findAll be queried every time Class A calls Class B?
class A {
    // for each item in the list, call Class B
    b.someMethod();
}

class B {
    @Autowired
    CardedListRepository cardRepo;

    someMethod() {
        cardRepo.findAll();
    }
}
What is the best way to achieve this?
If it is just 50 rows, you could cache them in an instance variable of the service and check like this:
class B {
    @Autowired
    CardedListRepository cardRepo;

    List<CardedList> cardedList = new ArrayList<>();

    someMethod() {
        if (cardedList.isEmpty()) {
            cardedList = cardRepo.findAll();
        }
        // do the rest of someMethod
    }
}
The "solution" proposed by @Juliyanage Silva (to "cache" the findAll query result as a simple instance variable of service B) can be very dangerous and should not be implemented before checking very carefully that it works under all circumstances.
Just imagine the same service instance being called from a subsequent transaction - you would end up with a (probably outdated) list of detached entities
(e.g. leading to LazyInitializationExceptions when accessing uninitialized properties, etc.).
Hibernate already provides several caching mechanisms, e.g. the standard first-level cache, which avoids unnecessary DB round trips when looking up an already loaded entity by ID within the same transaction.
However, query results (as from findAll) are not cached by default, as explained in the documentation:
Caching of query results introduces some overhead in terms of your application's normal transactional processing. For example, if you cache results of a query against Person, Hibernate will need to keep track of when those results should be invalidated because changes have been committed against any Person entity.
That, coupled with the fact that most applications simply gain no benefit from caching query results, leads Hibernate to disable caching of query results by default.
To enable the Hibernate query cache, the second level cache needs to be configured. To prevent ending up with stale entries when having multiple application instances, this calls for a distributed cache (like Hazelcast or EhCache).
There are also various discussions on using Spring's caching mechanisms for this purpose. However, there are various pitfalls when it comes to caching collections, and when running multiple application instances you may need a distributed cache or another global invalidation mechanism, too.
How to add cache feature in Spring Data JPA CRUDRepository
Spring Cache with collection of items/entities
Spring Caching not working for findAll method
So depending on your use-case, it may be the easiest to just avoid unnecessary calls of service B by storing the result in a local variable within the calling method of service A.
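That last suggestion can be sketched as follows (CountingRepo is a hypothetical stand-in for the real CardedListRepository; it only counts how often findAll is hit, to show the repository is queried exactly once per request):

```java
import java.util.List;

public class LocalCacheSketch {
    // Stand-in for the Spring Data repository; counts queries instead of hitting a DB.
    static class CountingRepo {
        int calls = 0;
        List<String> findAll() { calls++; return List.of("v1", "v2", "v3"); }
    }

    // Processes every item against the lookup list, querying the repo a single time.
    static int process(List<String> items) {
        CountingRepo repo = new CountingRepo();
        List<String> lookup = repo.findAll();     // one query for the whole request
        for (String item : items) {
            lookup.contains(item);                // reuse the local copy in the loop
        }
        return repo.calls;
    }

    public static void main(String[] args) {
        System.out.println("queries=" + process(List.of("a", "b", "c")));
    }
}
```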
I am using Spring Boot, and my application is just monolithic for now; I may switch to microservices later.
SCENARIO 1: Here my DB call does NOT depend on the REST response
@Transactional
class MyService {
    public void DBCallNotDependsOnRESTResponse() {
        // DB call
        // REST call; this REST call gives a response like 200 "successful"
    }
}
SCENARIO 2: Here my DB call depends on the REST response
@Transactional
class MyService {
    public void DBCallDependsOnRESTResponse() {
        // REST call, making a real transaction using BrainTree
        // DB call; HERE THE DB CALL DEPENDS ON THE REST RESPONSE
    }
}
In the case of Scenario 1, I have no issues, as the DB gets rolled back if the REST call fails.
But in the case of Scenario 2, the REST call cannot be rolled back if any exception occurs at the DB call.
I already searched Google for the above. I found some solutions suggesting something like a Pub-Sub model, but I could not get that concept into my head clearly.
I would be glad if someone could provide a solution for Scenario 2. How do other e-commerce businesses handle their transactions effectively? I guess my query relates to architecture design. Please advise a good architectural approach to solve the above transaction issue. Do you think using a messaging system like Kafka would solve the above issue? FYI, my application is currently monolithic - shall I use microservices? Do I need to use two-phase commit, or will Sagas solve my problem? Can Sagas be used in a monolithic application?
EDIT:
Regarding the REST call: I am actually making a real transaction using BrainTree, which is a REST call.
Can you elaborate on what you are achieving from the REST call? Are you updating any data that will be used by the DB call?
If the two calls are independent, does the order matter? Since the DB call will be committed at the end of the method itself.
I'm a new Spring user.
I have a question about scope and transaction.
For example, there's a service:
<bean id="bankInDaoService" class="service.dao.impl.UserDaoServiceImpl">
Let's say there are 2 people who want to bank-in at the same time.
And I have already put @Transactional (for a Hibernate transaction) above the method used for the bank-in.
My questions are:
Since the default Spring scope is singleton, will these 2 people share the same values (person 1 banks in 500, person 2 banks in 500)?
Will @Transactional be effective? I mean, will it let the first person finish the bank-in, and then person 2?
I'll be really thankful for your help.
You have misunderstood the usage of the @Transactional annotation.
The @Transactional annotation is used when you want all or none of your operations to succeed. If any operation fails, the other successful operations are rolled back. It is not for synchronisation.
If you have a registration page where you take input for 10 fields, 5 for table user and 5 for table company, and you insert both records from a single service function, that is when you should use the @Transactional annotation. If insertion succeeds in the user table but fails in the company table, then the user table record will be rolled back.
Hope this helps you. Cheers.
You are correct that by default Spring beans are singletons. But this won't be a problem unless your implementation modifies some internal state on each invocation (which would be rather odd - typically a service method will just work with the parameters it's been given).
As I just alluded to, each service method invocation will have its own parameters; i.e.:
deposit(person1_ID, 500)
deposit(person2_ID, 750)
As you've said "at the same time" we can safely assume we have a multi-threaded server that is handling both these people simultaneously, one per thread. Method parameters are placed on the stack for any given thread - so as far as your service is concerned, there is absolutely no connection/chance of corruption between the two people's deposits.
Now turning to the #Transactional annotation: Spring uses "aspects" to implement this behaviour, and again these will be applied separately to each thread of execution, and are independent.
If you're looking for #Transactional to enforce some kind of ordering (for example, you want person2 to withdraw the exact amount person1 deposited) then you need to write a new method that performs both operations in sequence within the one #Transactional scope.
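The thread-independence argument above can be illustrated with a small self-contained sketch (all names are hypothetical; a ConcurrentHashMap stands in for the database, and the stateless deposit method works only with its own parameters, just as a singleton Spring service method would):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StatelessServiceSketch {
    static final Map<String, Integer> balances = new ConcurrentHashMap<>();

    // No mutable fields are touched: everything the method needs arrives as a parameter,
    // so concurrent invocations on the singleton cannot corrupt each other.
    static void deposit(String personId, int amount) {
        balances.merge(personId, amount, Integer::sum);
    }

    // Two people "bank in at the same time", one per thread.
    static Map<String, Integer> run() throws InterruptedException {
        Thread t1 = new Thread(() -> deposit("person1", 500));
        Thread t2 = new Thread(() -> deposit("person2", 750));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return balances;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Each account ends up holding exactly its own deposit, regardless of thread interleaving.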
Imagine you have a large amount of data in a database, approx. ~100Mb. We need to process all of it somehow (update it or export it somewhere else). How can this task be implemented with good performance? How should transaction propagation be set up?
Example #1 (with bad performance):
@Singleton
public class ServiceBean {

    public void processAllData() {
        List<Entity> entityList = dao.findAll();
        for (Entity entity : entityList) {
            process(entity);
        }
    }

    private void process(Entity ent) {
        // data processing
        // saves data back (UPDATE operation) or exports it somewhere else (just READs from the DB)
    }
}
What could be improved here ?
In my opinion :
I would set the Hibernate batch size (see the Hibernate documentation on batch processing).
I would separate ServiceBean into two Spring beans with different transaction settings. The method processAllData() should run outside a transaction, because it operates on large amounts of data and a potential rollback wouldn't be 'quick' (I guess). The method process(Entity entity) would run in a transaction - rolling back a single data entity is no big deal.
Do you agree ? Any tips ?
Here are 2 basic strategies:
JDBC batching: set the JDBC batch size, usually somewhere between 20 and 50 (hibernate.jdbc.batch_size). If you are mixing and matching object C/U/D operations, make sure you have Hibernate configured to order inserts and updates, otherwise it won't batch (hibernate.order_inserts and hibernate.order_updates). And when doing batching, it is imperative to make sure you clear() your Session so that you don't run into memory issues during a large transaction.
Concatenated SQL statements: implement the Hibernate Work interface and use your implementation class (or anonymous inner class) to run native SQL against the JDBC connection. Concatenate hand-coded SQL via semicolons (works in most DBs) and then process that SQL via doWork. This strategy allows you to use the Hibernate transaction coordinator while being able to harness the full power of native SQL.
You will generally find that no matter how fast you can get your OO code, using DB tricks like concatenating SQL statements will be faster.
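The flush/clear rhythm that keeps the persistence context small during batching can be sketched like this (StubSession is a hypothetical stand-in for a Hibernate Session; in real code you would call flush() and clear() on the actual Session every hibernate.jdbc.batch_size entities):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFlushSketch {
    // Minimal stand-in for a Hibernate Session: tracks attached entities and flushes.
    static class StubSession {
        List<Integer> attached = new ArrayList<>();
        int flushes = 0;
        void save(int entity) { attached.add(entity); }
        void flush() { flushes++; }           // push pending changes to the DB
        void clear() { attached.clear(); }    // detach everything: keeps memory flat
    }

    // Saves n entities, flushing and clearing after every batchSize of them.
    static StubSession saveInBatches(int n, int batchSize) {
        StubSession session = new StubSession();
        for (int i = 0; i < n; i++) {
            session.save(i);
            if ((i + 1) % batchSize == 0) {
                session.flush();
                session.clear();
            }
        }
        session.flush(); // flush any final partial batch
        return session;
    }

    public static void main(String[] args) {
        StubSession s = saveInBatches(100, 20);
        System.out.println("flushes=" + s.flushes + " attached=" + s.attached.size());
    }
}
```

Without the periodic clear(), all 100 entities would stay attached and be dirty-checked on every flush.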
There are a few things to keep in mind here:
Loading all entities into memory with a findAll method can lead to OOM exceptions.
You need to avoid attaching all of the entities to a session - since every time Hibernate executes a flush it will need to dirty-check every attached entity. This will quickly grind your processing to a halt.
Hibernate provides a stateless session which you can use with a scrollable results set to scroll through entities one by one - docs here. You can then use this session to update the entity without ever attaching it to a session.
The other alternative is to use a stateful session but clear the session at regular intervals as shown here.
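The one-by-one scrolling idea can be sketched as follows (a plain Iterator stands in here for a Hibernate StatelessSession + ScrollableResults cursor; the point is that only the current entity is held in memory, never the whole table):

```java
import java.util.Iterator;
import java.util.stream.IntStream;

public class ScrollSketch {
    // Pretend each int is an entity streamed from a database cursor,
    // rather than a fully materialised findAll() list.
    static Iterator<Integer> scroll(int rows) {
        return IntStream.range(0, rows).iterator();
    }

    static int processAll(int rows) {
        int processed = 0;
        Iterator<Integer> cursor = scroll(rows);
        while (cursor.hasNext()) {
            Integer entity = cursor.next(); // only this one entity is in scope
            processed++;                    // update / export it here
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println("processed=" + processAll(1000));
    }
}
```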
I hope this is useful advice.
I looked at the example on http://solitarygeek.com/java/developing-a-simple-java-application-with-spring/comment-page-1#comment-1639
I'm trying to figure out why the service layer is needed in the first place in the example he provides. If you took it out, then in your client, you could just do:
UserDao userDao = new UserDaoImpl();
Iterator users = userDao.getUsers();
while (…) {
…
}
It seems like the service layer is simply a wrapper around the DAO. Can someone give me a case where things could get messy if the service layer were removed? I just don’t see the point in having the service layer to begin with.
Having the service layer be a wrapper around the DAO is a common anti-pattern. In the example you give it is certainly not very useful. Using a service layer means you get several benefits:
you get to make a clear distinction between web type activity best done in the controller and generic business logic that is not web-related. You can test service-related business logic separately from controller logic.
you get to specify transaction behavior so if you have calls to multiple data access objects you can specify that they occur within the same transaction. In your example there's an initial call to a dao followed by a loop, which could presumably contain more dao calls. Keeping those calls within one transaction means that the database does less work (it doesn't have to create a new transaction for every call to a Dao) but more importantly it means the data retrieved is going to be more consistent.
you can nest services so that if one has different transactional behavior (requires its own transaction) you can enforce that.
you can use the postCommit interceptor to do notification stuff like sending emails, so that doesn't junk up the controller.
Typically I have services that encompass use cases for a single type of user, each method on the service is a single action (work to be done in a single request-response cycle) that that user would be performing, and unlike your example there is typically more than a simple data access object call going on in there.
Take a look at the following article:
http://www.martinfowler.com/bliki/AnemicDomainModel.html
It all depends on where you want to put your logic - in your services or your domain objects.
The service layer approach is appropriate if you have a complex architecture and require different interfaces to your DAOs and data. It is also good for providing coarse-grained methods for clients to call - methods which call out to multiple DAOs to gather data.
However, in most cases what you want is a simple architecture so skip the service layer and look at a domain model approach. Domain Driven Design by Eric Evans and the InfoQ article here expand on this:
http://www.infoq.com/articles/ddd-in-practice
Using a service layer is a well-accepted design pattern in the Java community. Yes, you could use the DAO implementation straight away, but what if you want to apply some business rules?
Say you want to perform some checks before allowing a user to log into the system. Where would you put that logic? Also, the service layer is the place for transaction demarcation.
It's generally good to keep your DAO layer clean and lean. I suggest you read the article "Don't repeat the DAO". If you follow the principles in that article, you won't be writing any implementations for your DAOs.
Also, kindly note that the scope of that blog post was to help beginners in Spring. Spring is so powerful that you can bend it to suit your needs with powerful concepts like AOP.
Regards,
James