JPA @Version behavior when data is changed from an unmanaged connection - Oracle

@Version is enabled on the Customer entity while running the tests below.
@Test
public void actionsTest1() throws InterruptedException {
    CustomerState t = customerStateRepository.findById(1L).get();
    Thread.sleep(20000);
    t.setInvoiceNumber("1");
    customerStateRepository.save(t);
}
While actionsTest1 is sleeping, I run actionsTest2, which updates the invoice number to 2.
@Test
public void actionsTest2() throws InterruptedException {
    CustomerState t = customerStateRepository.findById(1L).get();
    t.setInvoiceNumber("2");
    customerStateRepository.save(t);
}
When actionsTest1 returns from sleeping, it tries to update too and gets an ObjectOptimisticLockingFailureException.
This works as expected.
But if I run actionsTest1 and, while it is sleeping, open a SQL terminal and do a raw update:
update customer
set invoice_number = '3' where id = 1
then when actionsTest1 returns from sleeping, the versioning mechanism doesn't catch the conflict, and it updates the value back to 1.
Is that expected behavior? Does versioning work only with connections managed by JPA?

It works as expected. If you do an update manually, you have to update the version column as well.
When you use JPA with @Version, JPA increments the version column for you.
To get the result you expect, you have to write the statement like this:
update customer set invoice_number = '3', version = version + 1 where id = 1

Is that expected behavior?
Yes.
Does versioning work only with connections managed by JPA?
No, it also works with any other way of updating your data. But everything that updates the data has to adhere to the rules of optimistic locking:
increment the version column whenever performing any update;
(only required when the other process also wants to detect concurrent updates) on every update, check that the version number hasn't changed since the data on which the update is based was loaded.
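A minimal plain-Java sketch of those two rules, using an in-memory stand-in for the database table (all class and method names here are illustrative, not framework API):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory illustration of the optimistic-locking rules above: every writer
// must check the version it originally read and increment it on a successful update.
public class VersionedStore {
    public static class Row {
        public String invoiceNumber;
        public long version;
        public Row(String invoiceNumber, long version) {
            this.invoiceNumber = invoiceNumber;
            this.version = version;
        }
    }

    private final Map<Long, Row> rows = new HashMap<>();

    public void insert(long id, String invoiceNumber) {
        rows.put(id, new Row(invoiceNumber, 0L));
    }

    public long readVersion(long id) {
        return rows.get(id).version;
    }

    // Emulates: UPDATE customer SET invoice_number = ?, version = version + 1
    //           WHERE id = ? AND version = ?
    // Returns false ("0 rows updated") when the version check fails.
    public synchronized boolean update(long id, String invoiceNumber, long expectedVersion) {
        Row row = rows.get(id);
        if (row == null || row.version != expectedVersion) {
            return false; // concurrent update detected
        }
        row.invoiceNumber = invoiceNumber;
        row.version++;
        return true;
    }
}
```

A raw SQL session that follows the same protocol would be detected by JPA, because the stale writer's version check fails and zero rows are updated.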

Hibernate automatically increments/changes the value of the @Version-mapped column in your database.
When you fetch an entity record, Hibernate keeps a copy of the data along with the value of @Version. When performing a merge or update operation, Hibernate checks whether the current value of the version column still matches the copy of the entity fetched earlier.
If the value matches, the entity is not dirty (not updated by any other transaction); otherwise an exception is thrown.

Related

updateFirst method not always saving the object

The updateFirst method of Spring's MongoTemplate is not always updating the MongoDB collection as expected.
(The update request is fired by a save button in the front-end UI; we see a toast message saying the table was successfully saved after the save button is pressed and the request completes.)
When several update requests are fired sequentially, one after another with no gap between them, after a few requests the data is no longer updated, but there are no errors in the logs.
Below is the method which updates the Database.
@Override
public void updateTable(Source source, Table table) {
    log.debug("updating existing table " + table.getTableId() + " on source " + source.getSourceId());
    source.setStatus(SourceStatus.InProcess);
    Query q = query(where("_id").is(source.getId()).and("deleted").is(false).and("tables._id").is(table.getId()));
    Update u = update("tables.$", table);
    u.set("lastModifiedAt", source.getLastModifiedAt())
     .set("lastModifiedBy", source.getLastModifiedBy())
     .set("errorInObject", source.isErrorInObject())
     .set("errorInChildObject", source.isErrorInChildObject())
     .set("errors", source.getErrors())
     .set("failedFields", source.getFailedFields())
     .set("status", source.getStatus());
    template.updateFirst(q, u, Source.class);
}
I have logged the queries fired to MongoDB from the Spring Boot application (enabled via a properties-file parameter), and the update query being fired always has the proper values, but:
when the update works, there is a log message saying "Saving object to database" just before the update query is fired;
when the update does not work, that log message appears just after the update query is fired. I think it is writing the previous update to the DB instead of the current one.
MongoDB version -- 2.6.7
Spring Mongo Template Driver version -- 2.13.3
Spring Boot version -- v1.3.2.RELEASE
Can provide additional information if needed.
I have dug into the issue and see that it occurs when I fire update commands at the DB with no gap between them, which somehow disrupts MongoDB's processing of the updates, so that some updates get overwritten.
So I added a 500 ms delay to every update request before calling the updateFirst method, and this fixed the issue: the update is now preserved every time, because even when the update requests are very frequent there is enough time to process each one.
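The workaround above can be wrapped in a small helper that enforces a minimum gap between successive update calls. This is only a sketch of the delay the author added, not Spring Data API; the class name is illustrative and the 500 ms figure is the one from the text:

```java
// Enforces a minimum gap between successive calls, mimicking the 500 ms pause
// added before each template.updateFirst(...) call. Illustrative name, not an API.
public class UpdateThrottle {
    private final long minGapMillis;
    private long lastCallAt = 0L;

    public UpdateThrottle(long minGapMillis) {
        this.minGapMillis = minGapMillis;
    }

    // Blocks until at least minGapMillis have passed since the previous call.
    public synchronized void acquire() {
        long now = System.currentTimeMillis();
        long waitFor = lastCallAt + minGapMillis - now;
        if (waitFor > 0) {
            try {
                Thread.sleep(waitFor);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        lastCallAt = System.currentTimeMillis();
    }
}
```

updateTable would then call something like `throttle.acquire()` immediately before `template.updateFirst(q, u, Source.class)`, with a throttle constructed as `new UpdateThrottle(500)`.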

Spring Boot @Transactional method running on multiple threads

In my Spring Boot application, multiple threads run the following @Transactional method in parallel.
@Transactional
public void run(Customer customer) {
    Customer existing = this.clientCustomerService.findByCustomerName(customer.getName());
    if (existing == null) {
        this.clientCustomerService.save(customer);
    }
    // other database operations
}
When this runs on multiple threads at the same time, since the customer object will not be saved until the end of the transaction block, is there any possibility of duplicate customers in the database?
If your Customer has an @Id field defining a primary-key column in the Customer table, the database will throw an exception such as javax.persistence.EntityExistsException. Even if you run your code on multiple threads, at the database level only one of them will acquire a lock on the newly inserted row at any point in time. You must also define an @Version column/field at the top entity level in order to use optimistic locking. More details about this you can find here.
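The check-then-save sequence in the question is not atomic, so two threads can both see "not found". A plain-Java stand-in makes the race concrete; here a ConcurrentHashMap's putIfAbsent plays the role of the database's primary-key/unique constraint (all names are illustrative, not from the question's codebase):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Stand-in for the race in the question: two callers may both see "customer
// not found" and both try to insert. The map's putIfAbsent plays the role of
// the database's primary-key constraint: exactly one insert wins, and the
// other caller must handle the "already exists" outcome.
public class CustomerRace {
    private final ConcurrentMap<String, String> table = new ConcurrentHashMap<>();

    // Returns true only if this caller actually inserted the row.
    public boolean saveIfAbsent(String name, String data) {
        return table.putIfAbsent(name, data) == null;
    }

    public int rowCount() {
        return table.size();
    }
}
```

With a real database, the losing transaction would see the constraint violation (e.g. an EntityExistsException or a constraint-violation SQL error) rather than silently creating a duplicate.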

Hibernate PostUpdateEventListener does not give the old state if saveOrUpdate or update is used

We are implementing an activity log for our application. We decided to go with the approach of capturing Hibernate PostUpdateEvent listeners, recording the old and new values, and writing them to our activity log. The approach works wherever we have a merge operation, but when we use saveOrUpdate or update we do not get the old values.
PostUpdateEvent postUpdateEvent = (PostUpdateEvent) event;
Field[] fields = postUpdateEvent.getEntity().getClass().getDeclaredFields();
String[] propertyNames = postUpdateEvent.getPersister().getEntityMetamodel().getPropertyNames();
Object[] newStates = postUpdateEvent.getState();
Object[] oldStates = postUpdateEvent.getOldState();
Edit:
As part of this, there are a few queries on the gaps I have in my understanding.
With respect to Hibernate, there are three entity states:
1. Transient
2. Persistent
3. Detached
If we create a new object, it is in the transient state. When we call update, saveOrUpdate, or merge on the Hibernate session, the object becomes persistent.
In this scenario, if we use the Hibernate PostUpdateEvent to capture the updates, we get the old values only when we use merge; we do not get them when we use update or saveOrUpdate.
My understanding was that saveOrUpdate also issues a get call to determine whether it is an insert or an update, so in such cases I was expecting it to give the old values in the PostUpdateEvent, similar to what we get with merge.
But we don't get them for saveOrUpdate and update calls; we only get the old values for merge.
It sounds like a PostUpdateEvent listener used for audit logs won't give us the old values with saveOrUpdate or update.
Am I missing something here?
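One possible explanation (an assumption, not something confirmed in the question): merge() first loads the persistent copy of the entity, so Hibernate has an old state to hand to the listener, whereas update()/saveOrUpdate() on a detached entity writes without re-reading the row. Hibernate's @SelectBeforeUpdate annotation forces a SELECT before such updates and may make the old state available, at the cost of an extra query per update. A configuration sketch (the entity name is illustrative):

```java
import org.hibernate.annotations.SelectBeforeUpdate;

import javax.persistence.Entity;
import javax.persistence.Id;

// Configuration sketch: force Hibernate to SELECT the current row before
// updating a detached entity, so the event listener has an old state to
// compare against. Whether this fits depends on the update volume.
@Entity
@SelectBeforeUpdate
public class AuditedEntity {

    @Id
    private Long id;

    // ... audited fields ...
}
```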

Double instances in database after using EntityManager.merge() in a @Transactional method

I am new to Spring. My application, developed with Spring Roo, has a cron job that downloads some files every day and updates a database.
The update is done, after downloading and parsing the files, using merge().
An entity class Dataset has a list called resources; after the download I do:
dataset.setResources(resources);
dataset.merge();
and dataset.merge() does the following:
@Transactional
public Dataset Dataset.merge() {
    if (this.entityManager == null) this.entityManager = entityManager();
    Dataset merged = this.entityManager.merge(this);
    this.entityManager.flush();
    return merged;
}
I expected that calling dataset.setResources(resources) would overwrite the resources field, and so the database entries would be overwritten too.
But I get double entries in the database: every resource appears twice, with different (incremental) IDs.
How can I make my application do updates instead of inserts? A naive solution would be to delete the old resources manually and then call merge(); is this the way, or is there a smarter solution?
This situation occurs when you use Hibernate as the persistence engine and your entities have a version field.
Normally the ID field is all we need to merge a detached object with its persistent state in the database, but Hibernate also takes the version field into account: if you don't set it (it is null), Hibernate discards the value of the ID field and creates a new object with a new ID.
To find out whether you are affected by this strange feature of Hibernate, set a value in the version field; if an exception is thrown, you've got it. In that case the best solution is to have the parsed data contain the right value of the version. Other options are to disable version checking (see the Hibernate reference guide) or to load the persistent state before merging.
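The "null version means new object" behavior described above can be sketched in plain Java. This is an illustrative stand-in for merge(), not the real JPA API, and all names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the behavior described in the answer: a "merge" that treats a
// null version as a brand-new object (new generated ID, duplicate row), and
// only updates in place when the version is carried over from the database.
public class MergeSimulation {
    public static class Resource {
        public Long id;        // null until "persisted"
        public Long version;   // null on a freshly parsed object
        public String payload;
    }

    private final Map<Long, Resource> table = new HashMap<>();
    private long nextId = 1L;

    public Long merge(Resource detached) {
        if (detached.version == null) {
            // Version missing: the ID is discarded and a new row is inserted.
            detached.id = nextId++;
            detached.version = 0L;
        } else {
            // Version present: merged with the existing row -> an update.
            detached.version++;
        }
        table.put(detached.id, detached);
        return detached.id;
    }

    public int rowCount() {
        return table.size();
    }
}
```

The test below shows the duplicate appearing when the version is left null, and the update happening once the version is set, mirroring the fix the answer suggests.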

Cache tables with parallel services causing problems: unique constraint SQLException (Spring JDBC)

Using an Oracle database.
Here's how I think the SQLException happens.
Say I have two instances of a service running in parallel. Both of them do the following:
Query the cache (B) to see if the person exists there.
If the person exists but is out of date, OR doesn't exist: query the main database (A).
If the person is found in database A and was NOT found earlier in cache B: INSERT; else, if the person was found in the cache earlier but was out of date: UPDATE the cache.
I use the following code to make the decision, based on the earlier query to cache B.
void insertOrUpdate(RegistryPersonMo person) {
    if (person.getId() == null) {
        insertPerson(person);
    } else {
        updatePerson(person);
    }
}
and insert using Spring JDBC:
void insertPerson(RegistryPersonMo person) {
    Number id = insertInto("PERSON_REGISTRY", "RAAMAT").usingGeneratedKeyColumns("ID").executeAndReturnKey(usingParameters(person));
    if (id != null) {
        person.setId(id.longValue());
    }
}
The actual problem occurs when two instances of the service have finished querying the cache (B) and the person wasn't found (null). One instance then does an INSERT, because the data did not exist; the other gets an SQLException when trying to do the same, because an entry with a unique constraint already exists.
Does anyone know the best/standard workaround? Some ideas I've had:
Lock reading of the row until the insert is done. Can I do this using Spring?
Use REPLACE or INSERT with IGNORE. I'm still learning; are there any downsides to these?
Bear in mind I'd like to use Spring and automate the query as much as possible.
I think it's fine in this situation just to ignore the unique constraint exception. Yes, this is a race condition, but an expected one: the desired outcome is achieved, and the record is inserted. Perhaps log it so you can see how often it happens.
Locking or transaction serialization would resolve the issue, but in my opinion it doesn't make much sense in this case.
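The "ignore it" advice might look like the sketch below at the insertOrUpdate level. In Spring JDBC a unique-constraint violation typically surfaces as org.springframework.dao.DuplicateKeyException; to stay self-contained, the sketch simulates the constraint with a map and a stand-in exception (all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the answer's advice: attempt the INSERT, and if the unique
// constraint fires (simulated here with a map plus a stand-in exception for
// Spring's DuplicateKeyException), swallow it, log, and carry on: the record
// is already present, which is the desired outcome.
public class TolerantInserter {
    static class DuplicateKeyException extends RuntimeException {
        DuplicateKeyException(String msg) { super(msg); }
    }

    private final Map<Long, String> table = new HashMap<>();

    private void insert(long id, String data) {
        if (table.containsKey(id)) {
            throw new DuplicateKeyException("unique constraint violated for id " + id);
        }
        table.put(id, data);
    }

    // Insert, treating "row already there" as success achieved by the
    // parallel service instance.
    public void insertIgnoringDuplicates(long id, String data) {
        try {
            insert(id, data);
        } catch (DuplicateKeyException e) {
            // Expected race: the other instance won. Log and move on.
            System.out.println("duplicate insert ignored: " + e.getMessage());
        }
    }

    public int rowCount() {
        return table.size();
    }
}
```

With the real Spring JDBC code, the catch clause would target DuplicateKeyException from org.springframework.dao around the insertPerson call.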
