Save and lock entity with Hibernate - spring

I'm looking for a way to save and immediately lock an entity in the database, to prevent other threads from accessing the entity before the creating thread finishes.
I'm using Hibernate 4.3.11 and Spring 4.2.5.
Thanks in advance.

There is a lock mode, LockMode.WRITE, but as the documentation states:
A WRITE lock is obtained when an object is updated or inserted. This
lock mode is for internal use only and is not a valid mode for load()
or lock() (both of which throw exceptions if WRITE is specified).
If you are only inserting rows, you cannot explicitly lock those rows with Hibernate, because the rows are not yet committed.
The moment your code (with or without Hibernate) inserts rows into the database and has not yet committed, transactional locks are held that are released on commit. The nature of those locks and how they are taken internally is database specific. However, if you want to lock rows that already exist, you can query the data using
session.get(TestEntity.class, 1, LockMode.PESSIMISTIC_WRITE);
This will hold a pessimistic lock (typically by issuing SELECT ... FOR UPDATE) for the duration of the transaction, and no other thread/transaction can modify the data on which the lock has been taken.
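For example, a minimal sketch of how this could look in a Spring-managed service (TestEntityService, updateExclusively and the Long id type are illustrative, not taken from the question):

import org.hibernate.LockMode;
import org.hibernate.LockOptions;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TestEntityService {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional
    public void updateExclusively(Long id) {
        Session session = sessionFactory.getCurrentSession();
        // Typically issues SELECT ... FOR UPDATE; other transactions asking for
        // the same lock block until this transaction commits or rolls back.
        TestEntity entity = (TestEntity) session.get(
                TestEntity.class, id, new LockOptions(LockMode.PESSIMISTIC_WRITE));
        // ... modify the entity; no other transaction can update the row meanwhile
    }
}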

A possible approach is to raise the transaction isolation level to SERIALIZABLE.
This level ensures the data one transaction works with cannot be used by another transaction until the first one completes.
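With Spring this could be declared on the service method; a minimal sketch (the service and method names are placeholders):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EntityCreationService {

    // SERIALIZABLE makes the database behave as if concurrent transactions ran
    // one after another; conflicting transactions fail and must be retried.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void createAndProcess() {
        // save the entity and do the follow-up work inside this transaction
    }
}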

Hibernate offers two types of locks, optimistic and pessimistic. It's straightforward:
1) Optimistic locking uses versioning: the entity has a version column in the database, which is checked before an update; if it has changed in the meantime, an exception is thrown (a minimal @Version mapping is sketched after the link below).
2) Pessimistic locking means the database itself locks the row, and the lock is released once the operation completes. There are a few options, much like you would imagine: read lock, write lock, etc.
https://docs.jboss.org/hibernate/orm/4.0/devguide/en-US/html/ch05.html
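For illustration, a minimal optimistic-locking mapping (the Account entity below is just an example); Hibernate checks and increments the @Version column on every update:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private String name;

    // Compared on every update; a stale value makes Hibernate throw an
    // optimistic locking exception instead of silently overwriting data.
    @Version
    private Long version;

    // getters and setters omitted
}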

If you are using PostgreSQL I think the below example works:
@Query(value = """with ins_artist as (
insert into artist
values (301, 'Whoever1')
returning *
) select artist_id
from ins_artist
for update""", nativeQuery = true)
@Transactional(propagation = Propagation.REQUIRED)
Long insertArtist(); // returns artist ID
PS: I ran this query on https://postgres.devmountain.com/ . But it would need testing on a Java app.

Related

How to get updated objects after flush() in the same transaction (Hibernate/ Spring boot)

I have a list of ~10 000 objects.
I am trying to call a MySQL update query (a procedure) and then get the updated objects within the same transaction.
Can this be achieved ?
When I call a delete statement + flush(), Hibernate returns the correct objects (the deleted objects are missing).
But when I try an update statement + flush(), Hibernate returns the initial, unchanged objects.
@Transactional
void test() {
//...
em.createQuery("delete from StorefrontProduct sp where sp in (:storefrontProducts)")
.setParameter("storefrontProducts", storefrontProductsToDelete)
.executeUpdate();
// example
em.createQuery("update StorefrontProduct sp set sp.orderIndex=0 where sp.id=90")
.executeUpdate();
em.flush();
//Simple JPA query
List<StorefrontProduct> result = repository.findAllByPreviousOrderIndexIsNotNull();
//additional code....
}
After running the code above and putting a breakpoint after the findAll call, the objects deleted by the first query were indeed gone (the delete was flushed), but the effect of the update query was not visible.
That is known counterintuitive behaviour of Hibernate.
First of all, the em.flush() call might be superfluous if the flush mode is set to AUTO (in that case Hibernate automatically synchronises the persistence context (session-level cache) with the underlying database before executing update/delete queries).
Delete followed by select:
you issue a delete, then a select; since the select no longer sees the deleted records, they are missing from the result set. However, if you call findById you may still find the deleted records.
Update followed by select:
you issue an update, then a select; when processing the result set, Hibernate sees both the records stored in the database and the records held in the persistence context, and it assumes the persistence context is the source of truth. That is the reason you see "stale" data.
There are the following options to mitigate this counterintuitive behaviour:
do not perform direct updates, use the "slow" find/save API instead
either detach or refresh the stale entities after the direct update; em.clear() may also help, but it completely clears the persistence context, which might be undesirable (see the sketch below)
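For instance, building on the test() method from the question (em and repository as there; the refresh target would be a hypothetical already-loaded entity):

@Transactional
void updateAndReload() {
    // a direct (bulk) update bypasses the persistence context
    em.createQuery("update StorefrontProduct sp set sp.orderIndex=0 where sp.id=90")
            .executeUpdate();

    // Option 1: refresh individual stale entities that are already managed
    // em.refresh(staleProduct);

    // Option 2: drop the whole persistence context so the next query re-reads
    // everything from the database (this detaches all managed entities)
    em.clear();

    List<StorefrontProduct> reloaded =
            repository.findAllByPreviousOrderIndexIsNotNull();
}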

How to create a thread-safe insert or update with Hibernate (dealing with optimistic locking)

My problem.
I have a simple table, token. It has only a few attributes: id, token, username, version and an expire_date.
I have a REST service that will create or update a token. When a user requests a token, I would like to check whether the user (by username) already has an entry: if yes, simply update the expire_date and return; if not, create a new entry. The problem is that if I run a test with a few concurrent users (using a JMeter script) that call the REST service, Hibernate very quickly
throws a StaleObjectStateException, because what happens is: thread A selects the row for the user, changes the expire_date and bumps the version; meanwhile thread B does the same but actually manages to commit before thread A. When thread A then commits, Hibernate detects the version change, throws the exception and rolls back. All works as documented.
But what I would like to happen is that thread B waits for thread A to finish before doing its thing.
What is the best way to solve this? Should I use the java.util.concurrent package and implement locks myself? Or is it a better option to implement a custom JPA isolation level?
Thanks
If you are using a Java EE server, the EJB container will do it for you with @Singleton.
I think the best way is to use a JPA lock to acquire a lock on the resource you are currently updating (a row lock). Don't spend effort implementing row locking with Java concurrency yourself: for example, it is much easier to lock the row containing user "john.doe" at the DBMS level than to find a way to lock that specific row with concurrency utilities in your own code.
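A rough sketch of that idea for the token table described in the question (the Token entity, its fields and the repository wiring are assumptions, not code from the question):

import java.util.Date;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TokenService {

    private static final long TTL_MILLIS = 3_600_000L;

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public Token issueToken(String username) {
        // SELECT ... FOR UPDATE: a concurrent request for the same username
        // blocks here until this transaction commits, instead of failing later
        // with a stale-object/optimistic-lock exception.
        List<Token> existing = em.createQuery(
                "select t from Token t where t.username = :username", Token.class)
            .setParameter("username", username)
            .setLockMode(LockModeType.PESSIMISTIC_WRITE)
            .getResultList();

        Token token;
        if (existing.isEmpty()) {
            // Note: two truly concurrent first-time requests can still race on
            // the insert, so a unique constraint on username is still advisable.
            token = new Token(username);
            em.persist(token);
        } else {
            token = existing.get(0);
        }
        token.setExpireDate(new Date(System.currentTimeMillis() + TTL_MILLIS));
        return token;
    }
}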

Difference between LockModeType Jpa

I am confused about the working of LockModeTypes in JPA:
LockModeType.Optimistic
it increments the version while committing.
My question here: if I have a version column in my entity and don't specify this lock mode, it works the same way, so what is the use of it?
LockModeType.OPTIMISTIC_FORCE_INCREMENT
Here it increments the version column even though the entity is not updated.
But what is the use of that if some other process updates the same row before this transaction commits? This transaction is going to fail anyway, so what is this LockModeType for?
LockModeType.PESSIMISTIC_READ
This lock mode issues a select for update nowait (if no lock timeout hint is specified).
So basically no other transaction can update this row until this transaction commits; that makes it essentially a write lock, so why is it named a read lock?
LockModeType.PESSIMISTIC_WRITE
This lock mode also issues a select for update nowait (if no lock timeout hint is specified).
My question here: what is the difference between this lock mode and LockModeType.PESSIMISTIC_READ, since I see both fire the same queries?
LockModeType.PESSIMISTIC_FORCE_INCREMENT
This does a select for update nowait (if no lock timeout hint is specified) and also increments the version number.
I totally don't get the use of it.
Why is a version increment required if for update nowait is already there?
I would first differentiate between optimistic and pessimistic locks, because they are different in their underlying mechanism.
Optimistic locking is fully controlled by JPA and only requires additional version column in DB tables. It is completely independent of underlying DB engine used to store relational data.
On the other hand, pessimistic locking uses locking mechanism provided by underlying database to lock existing records in tables. JPA needs to know how to trigger these locks and some databases do not support them or only partially.
Now to the list of lock types:
LockModeType.Optimistic
If entities specify a version field, this is the default. For entities without a version column, using this type of lock isn't guaranteed to work on any JPA implementation. This mode is usually ignored as stated by ObjectDB. In my opinion it only exists so that you may compute lock mode dynamically and pass it further even if the lock would be OPTIMISTIC in the end. Not very probable usecase though, but it is always good API design to provide an option to reference even the default value.
Example:
LockModeType lockMode = resolveLockMode();
A a = em.find(A.class, 1, lockMode);
LockModeType.OPTIMISTIC_FORCE_INCREMENT
This is a rarely used option. But it could be reasonable, if you want to lock referencing this entity by another entity. In other words you want to lock working with an entity even if it is not modified, but other entities may be modified in relation to this entity.
Example: We have entity Book and Shelf. It is possible to add Book to Shelf, but book does not have any reference to its shelf. It is reasonable to lock the action of moving a book to a shelf, so that a book does not end up in another shelf (due to another transaction) before end of this transaction. To lock this action, it is not sufficient to lock current book shelf entity, as the book does not have to be on a shelf yet. It also does not make sense to lock all target bookshelves, as they would be probably different in different transactions. The only thing that makes sense is to lock the book entity itself, even if in our case it does not get changed (it does not hold reference to its bookshelf).
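A rough sketch of that scenario, assuming an injected EntityManager em (Book, Shelf and addBookToShelf are hypothetical):

@Transactional
public void addBookToShelf(long bookId, long shelfId) {
    Book book = em.find(Book.class, bookId);
    Shelf shelf = em.find(Shelf.class, shelfId);

    // Forces a version bump on Book at commit time even though the Book row is
    // not modified; a concurrent transaction moving the same book then fails
    // with an OptimisticLockException.
    em.lock(book, LockModeType.OPTIMISTIC_FORCE_INCREMENT);

    shelf.getBooks().add(book);
}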
LockModeType.PESSIMISTIC_READ
this mode is similar to LockModeType.PESSIMISTIC_WRITE, but differs in one thing: unless a write lock is already in place on the same entity from some transaction, it should not block reading the entity. It also allows other transactions to lock it using LockModeType.PESSIMISTIC_READ. The differences between WRITE and READ locks are well explained here (ObjectDB) and here (OpenJPA). If an entity is already locked by another transaction, any attempt to lock it will throw an exception. This behaviour can be changed to waiting for some time for the lock to be released before throwing an exception and rolling back the transaction. In order to do that, specify the javax.persistence.lock.timeout hint with the number of milliseconds to wait before throwing the exception. There are multiple ways to do this on multiple levels, as described in the Java EE tutorial.
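For example, the per-invocation variant might look like this (reusing the entity A from the snippet above, with a 5-second timeout):

Map<String, Object> hints = new HashMap<>();
hints.put("javax.persistence.lock.timeout", 5000);

// Throws a lock timeout / pessimistic lock exception if the lock cannot be
// obtained within 5 seconds instead of failing immediately.
A a = em.find(A.class, 1, LockModeType.PESSIMISTIC_READ, hints);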
LockModeType.PESSIMISTIC_WRITE
this is a stronger version of LockModeType.PESSIMISTIC_READ. When WRITE lock is in place, JPA with the help of the database will prevent any other transaction to read the entity, not only to write as with READ lock.
The way how this is implemented in a JPA provider in cooperation with underlying DB is not prescribed. In your case with Oracle, I would say that Oracle does not provide something close to a READ lock. SELECT...FOR UPDATE is really rather a WRITE lock. It may be a bug in hibernate or just a decision that, instead of implementing custom "softer" READ lock, the "harder" WRITE lock is used instead. This mostly does not break consistency, but does not hold all rules with READ locks. You could run some simple tests with READ locks and long running transactions to find out if more transactions are able to acquire READ locks on the same entity. This should be possible, whereas not with WRITE locks.
LockModeType.PESSIMISTIC_FORCE_INCREMENT
this is another rarely used lock mode. However, it is an option where you need to combine PESSIMISTIC and OPTIMISTIC mechanisms. Using plain PESSIMISTIC_WRITE would fail in following scenario:
transaction A uses optimistic locking and reads entity E
transaction B acquires WRITE lock on entity E
transaction B commits and releases lock of E
transaction A updates E and commits
In step 4, if the version column was not incremented by transaction B, nothing prevents A from overwriting B's changes. The lock mode LockModeType.PESSIMISTIC_FORCE_INCREMENT forces transaction B to update the version number, causing transaction A to fail with an OptimisticLockException, even though B was using pessimistic locking.
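A short sketch of step 2 with the stronger mode (the entity E, id and setValue are placeholders):

// Transaction B: takes the row lock AND bumps the version column, so
// transaction A's later optimistic commit is rejected.
E e = em.find(E.class, id, LockModeType.PESSIMISTIC_FORCE_INCREMENT);
e.setValue(newValue);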
LockModeType.NONE
this is the default if entities don't provide a version field. It means that no locking is enabled; conflicts will be resolved on a best-effort basis and will not be detected. This is the only lock mode allowed outside of a transaction.

findBy appears to update the database

I have the following code:
println "######## RUNNING ProfessionaCustomer - ${pcCounter} under ${accountCustomer.customerNumber} Professional SQLid ${it.id}"
def professionalCustomerId = it.customerId
def professionalCustomer = ProfessionalCustomer.findByCustomerNumber(professionalCustomerId)
I have SQL logging on and I get:
######## RUNNING ProfessionaCustomer - 31 under 106450 Professional SQLid 100759
Hibernate: update base_domain set version=?, account_name=?, address_line1=?, address_line2=?, city=?, customer_number=?, date_created=?, disabled=?, last_updated=?, postal_code=?, primary_phone=?, state_or_province=? where id=? and version=?
Hibernate: update base_domain set version=?, address1=?, address2=?, city=?, customer_number=?, date_created=?, disabled=?, first_name=?, last_name=?, last_updated=?, middle_name=?, phone_number=?, postal_code=?, state=? where id=? and version=?
Hibernate: insert into account_customer_professionals (account_customer_id, professional_customer_id) values (?, ?)
Hibernate: select this_.id as id1_3_0_, this_.version as version2_3_0_, this_.address1 as address70_3_0_, this_.address2 as address71_3_0_, this_.city as city7_3_0_, this_.customer_number as customer8_3_0_, this_.date_created as date_cre9_3_0_, this_.disabled as disable10_3_0_, this_.first_name as first_n19_3_0_, this_.last_name as last_na20_3_0_, this_.last_updated as last_up11_3_0_, this_.middle_name as middle_72_3_0_, this_.phone_number as phone_n73_3_0_, this_.postal_code as postal_12_3_0_, this_.state as state74_3_0_ from base_domain this_ where this_.class='com.eveo.nplate.model.ProfessionalCustomer' and this_.customer_number=? limit ?
Which is updating the DB. This would explain why this is so slow, but I can't see any reason for this to happen.
Why would 'findBy' cause an update?
Hibernate doesn't immediately execute creates, updates, or deletes until it thinks it has to - it waits as long as possible (although it's rather pessimistic) and only flushes these changes when you tell it to, or when it thinks it needs to. In general the only time it will flush without an explicit call is when running queries. This is because any of the new instances, updated instances, and deleted instances that are in-memory (cached in the Hibernate Session, the 1st-level cache) could affect the query results, so they must be flushed to the database so you get the proper results for your query.
One exception to this is calling save() on a new instance. Grails flushes this because typically the id is assigned by the database, either via an auto-increment column or a sequence. To ensure that the in-memory state is the same as the database, it flushes the save() call so it can retrieve the id and set it in the instance. But if you retrieve a persistence instance (e.g. with a get() call, or with a criteria query, finder, etc.) and modify it, calling save() on that does not get automatically flushed. The same goes for delete() calls - not flushed.
Think of delete() and save() calls on persistent instances as messages to Hibernate that the action should be performed "eventually".
So when you execute a finder, or a criteria, "where", or HQL query, Hibernate will flush any un-flushed changes for you. If you don't want that to happen (e.g. in a custom domain class validator closure) you can run the query in a separate session, e.g. with the withNewSession method.
If you don't flush the session at all, either explicitly on the Session instance or by adding flush:true to a save or delete call, the session will be flushed, since Grails registers an OpenSessionInView interceptor that starts a session at the beginning of each request, and flushes and closes it at the end. This helps with lazy loading; since there's a session open and bound to a ThreadLocal in a known location, Hibernate and GORM (via Spring's HibernateTemplate) can use that open session to retrieve lazy-loaded collections and instances on-demand after the query runs.
Note also that you do not need to flush in a transaction. The transaction manager is a Spring HibernateTransactionManager that flushes before committing.
Probably there were pending changes in the session that had not yet been persisted to the database.
When you ran the findBy, Hibernate took advantage of the connection to flush those two updates before running the query. I believe this is what happened.

Achieving ACID properties using JDBC?

First of all, I would like to confirm: is it the responsibility of the developer to uphold these properties, or the responsibility of a transaction API like JDBC?
Below is my understanding of how we achieve the ACID properties with JDBC.
Atomicity: as there is one transaction associated with a connection, we either commit or roll back, so there are no partial updates. Hence achieved.
Consistency: when some data integrity constraint is violated (say a check constraint), an SQLException is thrown. The programmer then achieves a consistent database by rolling back the transaction?
One question on the above: say we run transaction 1, and an SQLException is thrown during transaction 2 as explained above. If we catch the exception and commit, will the first transaction be committed?
Isolation: provided by the JDBC APIs. But this leads to the problem of concurrent updates, so it has to be dealt with manually, right?
Durability: provided by the JDBC APIs.
Please let me know if the above understanding is right.
ACID principles of transactional integrity are implemented by the database not by the API (like JDBC) or by the application. Your application's responsibility is to choose a database and a database configuration that supports whatever transactional integrity you need and to correctly identify the transactional boundaries in your application.
When an exception is thrown, your application has to determine whether it is appropriate to rollback the entire transaction or to proceed with additional processing. It may be appropriate if your application is processing orders from a vendor, for example, to process the 99 orders that succeed and log the 1 order that failed somewhere for users to investigate. On the other hand, you may reject all 100 orders because 1 failed. It depends what your application is doing.
In general, you only have one transaction open at a time (or, more accurately, one transaction per connection). So if you are working in transaction 2, transaction 1 by definition has already completed-- it was either committed or rolled back previously. Exceptions thrown in transaction 2 have no impact on transaction 1.
Depending on the transaction isolation level your application requests (and the transaction isolation levels your database supports) as well as the mechanics of your application, lost updates are something that you may need to be concerned about. If you set your transaction isolation level to read committed, it is possible that you would read a value as 'A' in transaction 1, wait for a user to do something, update the value to 'B', and commit without realizing that transaction 2 updated the value to 'C' between the time you read the data and the time you wrote the data. This may be a problem that you need to deal with or it may be something where it is fine for the last person to update a row to "win".
Your database, on the other hand, should take care of the automatic locking that prevents two transactions from simultaneously updating the same row of the same table. It may do this by locking more than is strictly necessary but it will serialize the updates somehow.
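A minimal sketch of where those transactional boundaries live in plain JDBC (the table and column names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public void processOrder(DataSource dataSource) throws SQLException {
    try (Connection con = dataSource.getConnection()) {
        con.setAutoCommit(false);                          // open the transaction boundary
        con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        try (PreparedStatement ps =
                con.prepareStatement("update orders set status = ? where id = ?")) {
            ps.setString(1, "PROCESSED");
            ps.setLong(2, 42L);
            ps.executeUpdate();
            con.commit();                                  // atomicity: all statements or none
        } catch (SQLException e) {
            con.rollback();                                // undo partial work on failure
            throw e;
        }
    }
}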
